Cognitive Web Accessibility: Assistive Technology 2012

These resources, all published in 2012, are original studies, literature reviews, and related articles that cite references.

  • Design and evaluation of classifier for identifying sign language videos in video sharing sites
    "In this paper, we describe the design and evaluation of a classifier for distinguishing between sign language videos and other videos. A test collection of SL videos and videos likely to be incorrectly recognized as SL videos (likely false positives) was created for evaluating alternative classifiers. Five video features thought to be potentially valuable for this task were developed based on common video analysis techniques. A comparison of the relative value of the five video features shows that a measure of the symmetry of movement relative to the face is the best feature for distinguishing sign language videos. Overall, an SVM classifier provided with all five features achieves 82% precision and 90% recall when tested on the challenging test collection. The performance would be considerably higher when applied to the more varied collections of large video sharing sites."
  • Evaluation of dynamic image pre-compensation for computer users with severe refractive error
    "This paper describes a new pre-compensation method to counter the visual blurring caused by the severe refractive errors of a specific computer user. It preprocesses pictorial information through dynamic pre-compensation, aiming to present customized images on the basis of the ocular aberrations of the specific computer user. The new method improves the previous static pre-compensation method by updating the aberration data according to pupil size variations, in real-time. The real-time aberration data enable us to generate better suited pre-compensated images, as the pre-compensation model is updated dynamically. An empirical study was conducted to evaluate the efficiency of the new pre-compensation method, through an icon recognition test. From the results of statistical analysis, we found that participants achieved significantly higher accuracy in recognizing icons with dynamic pre-compensation than when viewing the original icons. Accuracy was also significantly boosted when the icons were processed with the dynamic pre-compensation method, in comparison with the previous static pre-compensation method."
  • Effect of presenting video as a baseline during an American Sign Language animation user study
    "Our lab has conducted several prior studies to evaluate synthesized ASL animations by asking native signers to watch different versions of animations and to answer comprehension and subjective questions about them. As an upper baseline, we used an animation of a virtual human carefully created by a human animator who is a native ASL signer. Considering whether to instead use videos of human signers as an upper baseline, we wanted to quantify how including a video upper baseline would affect how participants evaluate the ASL animations presented in a study. In this paper, we replicate a user study we conducted two years ago, with one difference: replacing our original animation upper baseline with a video of a human signer. We found that adding a human video upper baseline depressed the subjective Likert-scale scores that participants assign to the other stimuli (the synthesized animations) in the study when viewed side-by-side. This paper provides methodological guidance for how to design user studies evaluating sign language animations and facilitates comparison of studies that have used different upper baselines."
  • "So that's what you see": building understanding with personalized simulations of colour vision deficiency
    "Simulation tools can help people with typical colour vision experience what a person with colour vision deficiency (CVD) sees; however, current simulations are based on general models that have several limitations, and therefore cannot accurately reflect the perceptual capabilities of most individuals with reduced colour vision. To address this problem, we have developed a new simulation approach that is based on a specific empirical model of the actual colour perception abilities of a person with CVD. The resulting simulation is therefore a more exact representation of what a particular person with CVD actually sees. We tested the new approach in two ways. First, we compared its accuracy with that of the existing models, and found that the personalized simulations were significantly more accurate than the old method. Second, we asked pairs of participants (one with CVD, and one close friend or family member without CVD) to discuss images of everyday scenes that had been simulated with the CVD person's particular model. We found that the personalized simulations provided new insights into the details of the CVD person's experience. The personalized-simulation approach shows great promise for improving understanding of CVD (and potentially other conditions) for people with ordinary perceptual abilities."
  • PassChords: secure multi-touch authentication for blind people
    "We interviewed 13 blind smartphone users and found that most participants were unaware of or not concerned about potential security threats. Not a single participant used optional authentication methods such as a password-protected screen lock. We addressed the high risk of unauthorized user access by developing PassChords, a non-visual authentication method for touch surfaces that is robust to aural and visual eavesdropping. A user enters a PassChord by tapping several times on a touch surface with one or more fingers. The set of fingers used in each tap defines the password. We give preliminary evidence that a four-tap PassChord has about the same entropy, a measure of password strength, as a four-digit personal identification number (PIN) used in the iPhone's Passcode Lock. We conducted a study with 16 blind participants that showed that PassChords were nearly three times as fast as iPhone's Passcode Lock with VoiceOver, suggesting that PassChords are a viable accessible authentication method for touch screens."
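    The entropy comparison in this abstract can be reproduced with quick arithmetic. The sketch below is illustrative rather than taken from the paper: it assumes each tap may use any non-empty subset of four fingers, with all choices equally likely.

    ```python
    import math

    def passchord_entropy(taps: int, fingers: int) -> float:
        # Each tap selects a non-empty subset of the available fingers,
        # so there are 2**fingers - 1 equally likely choices per tap.
        choices_per_tap = 2 ** fingers - 1
        return taps * math.log2(choices_per_tap)

    # Four-digit PIN: 10 choices per digit, 4 digits -> about 13.3 bits.
    pin_bits = 4 * math.log2(10)

    # Four-tap PassChord with four fingers (assumed): about 15.6 bits.
    chord_bits = passchord_entropy(4, 4)
    ```

    Under these assumptions the two values are within a few bits of each other, consistent with the abstract's claim that a four-tap PassChord has roughly the entropy of a four-digit PIN.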
  • Designing for individuals: usable touch-screen interaction through shared user models
    "In this paper we present an evaluation of the use of shared user modelling and adaptive interfaces to improve the accessibility of mobile touch-screen technologies. By using abilities-based information collected through application use and continually updating the user model and interface adaptations, it is easy for users to make applications aware of their needs and preferences. Three smart phone apps were created for this study and tested with 12 adults who had diverse visual and motor impairments. Results indicated significant benefits from the shared user models that can automatically adapt interfaces, across applications, to address usability needs."
  • Online quality control for real-time crowd captioning
    "In this paper, we present methods for quickly identifying workers who are producing good partial captions and estimating the quality of their input. We evaluate these methods in experiments run on Mechanical Turk in which a total of 42 workers captioned 20 minutes of audio. The methods introduced in this paper were able to raise overall accuracy from 57.8% to 81.22% while keeping coverage of the ground truth signal nearly unchanged."
  • Crowdsourcing subjective fashion advice using VizWiz: challenges and opportunities
    "We describe our findings of a diary study with people with vision impairments that revealed the many accessibility barriers fashion presents, and how an online survey revealed that clothing decisions are often made collaboratively, regardless of visual ability. Based on these findings, we identified a need for a collaborative and real-time environment for fashion advice. We have tested the feasibility of providing this advice through crowdsourcing using VizWiz, a mobile phone application where participants receive nearly real-time answers to visual questions. Our pilot study results show that this application has the potential to address a great need within the blind community, but remaining challenges include improving photo capture and assembling a set of crowd workers with the requisite expertise. More broadly, our research highlights the feasibility of using crowdsourcing for subjective, opinion-based advice."
  • Elderly text-entry performance on touchscreens
    "In this paper we examine text-entry performance and typing patterns of elderly users on touch-based devices. Moreover, we analyze users' hand tremor profile and its relationship to typing behavior. Our main goal is to inform future designs of touchscreen keyboards for elderly people. To this end, we asked 15 users to enter text under two device conditions (mobile and tablet) and measured their performance, both speed- and accuracy-wise. Additionally, we thoroughly analyze different types of errors (insertions, substitutions, and omissions) looking at touch input features and their main causes. Results show that omissions are the most common error type, mainly due to cognitive errors, followed by substitutions and insertions. While tablet devices can compensate for about 9% of typing errors, omissions are similar across conditions. Measured hand tremor largely correlates with text-entry errors, suggesting that it should be approached to improve input accuracy. Finally, we assess the effect of simple touch models and provide implications to design."
  • Exploration and avoidance of surrounding obstacles for the visually impaired
    "Proximity-based interaction through a long cane is essential for the blind and the visually impaired. We designed and implemented an obstacle detector consisting of a 3D Time-of-Flight (TOF) camera and a planar tactile display to extend the interaction range and provide rich non-visual information about the environment. Users choose a better path after acquiring the spatial layout of obstacles than they do with a white cane alone. A user study with 6 blind people showed that extra time is needed to ensure safe walking while reading the layout, and that both hanging and ground-based obstacles were circumvented. Tactile mapping information was designed to represent precise spatial information around a blind user."
  • Learning non-visual graphical information using a touch-based vibro-audio interface
    "This paper evaluates an inexpensive and intuitive approach for providing non-visual access to graphic material, called a vibro-audio interface. The system works by allowing users to freely explore graphical information on the touchscreen of a commercially available tablet and synchronously triggering vibration patterns and auditory information whenever an on-screen visual element is touched. Three studies were conducted that assessed legibility and comprehension of the relative relations and global structure of a bar graph (Exp 1), pattern recognition via a letter identification task (Exp 2), and orientation discrimination of geometric shapes (Exp 3). Performance with the touch-based device was compared to the same tasks performed using standard hardcopy tactile graphics. Results showed similar error performance between modes for all measures, indicating that the vibro-audio interface is a viable multimodal solution for providing access to dynamic visual information and supporting accurate spatial learning and the development of mental representations of graphical material."
  • Helping visually impaired users properly aim a camera
    "We evaluate three interaction modes to assist visually impaired users during the camera aiming process: speech, tone, and silent feedback. Our main assumption is that users are able to spatially localize what they want to photograph, and roughly aim the camera in the appropriate direction. Thus, small camera motions are sufficient for obtaining a good composition. Results in the context of documenting accessibility barriers related to public transportation show that audio feedback is valuable. Visually impaired users were not affected by audio feedback in terms of social comfort. Furthermore, we observed trends in favor of speech over tone, including higher ratings for ease of use. This study reinforces earlier work that suggests users who are blind or low vision find assisted photography appealing and useful."
  • A readability evaluation of real-time crowd captions in the classroom
    "We ran a study to evaluate the readability of captions generated by a new crowd captioning approach versus professional captionists and automatic speech recognition (ASR). In this approach, captions are typed by classmates into a system that aligns and merges the multiple incomplete caption streams into a single, comprehensive real-time transcript. Our study asked 48 deaf and hearing readers to evaluate transcripts produced by a professional captionist, ASR, and crowd captioning software, respectively, and found the readers preferred crowd captions over professional captions and ASR."
  • Detecting linguistic HCI markers in an online aphasia support group
    "In this study we asked whether the well-documented language deficits associated with aphasia can be detected in online writing of people with aphasia. We analyzed 150 messages (14,754 words) posted to an online aphasia support forum, by six people with aphasia and by four controls. Significant linguistic differences between people with aphasia and controls were detected, suggesting five putative linguistic HCI markers for aphasia. These findings suggest that interdisciplinary research on communication disorders and CMC has both applied and theoretical implications."
  • iSCAN: a phoneme-based predictive communication aid for nonspeaking individuals
    "In this paper, we investigate whether prediction techniques can be employed to improve the usability of such systems. We have developed iSCAN, a phoneme-based predictive communication system, which offers phoneme prediction and phoneme-based word prediction. A pilot study with 16 able-bodied participants showed that our predictive methods led to a 108.4% increase in phoneme entry speed and a 79.0% reduction in phoneme error rate. The benefits of the predictive methods were also demonstrated in a case study with a participant with cerebral palsy. Moreover, results of a comparative evaluation conducted with the same participant after 16 sessions using iSCAN indicated that our system outperformed an orthographic-based predictive communication device that the participant had used for over 4 years."
  • Design recommendations for tv user interfaces for older adults: findings from the eCAALYX project
    "While guidelines for designing websites and iTV applications for older adults exist, no previous work has suggested how to best design TV user interfaces (UIs) that are accessible to older adults. Building upon pertinent guidelines from related areas, this paper presents thirteen recommendations for designing UIs for TV applications for older adults. These recommendations are the result of iterative design, testing, and development of a TV-based health system for older adults that aims to provide a holistic solution to improve quality of life for older adults with chronic conditions by fostering their autonomy and reducing hospitalization costs. The authors' work and experience show that widely known UI design guidelines unsurprisingly apply to the design of TV-based applications for older adults, but acquire a crucial importance in this context."
  • What we talk about: designing a context-aware communication tool for people with aphasia
    "In this paper, we describe the design of TalkAbout, a context-aware, adaptive AAC system that provides users with a word list that is adapted to their current location and conversation partner. We describe the design and development of TalkAbout, which we conducted in collaboration with 5 adults with aphasia. We then present guidelines for developing and evaluating context-aware technology for people with aphasia."
  • Considerations for technology that support physical activity by older adults
    "In this paper, we present a set of needs that technology can address, and considerations for designing technology interventions that support physical activity by older adults."
  • Basic senior personas: a representative design tool covering the spectrum of European older adults
    "This paper introduces the development of a set of 30 basic senior personas, covering a broad range of characteristics of European older adults, following a quantitative development approach. The aim of this tool is to support researchers and developers in extending empathy for their target users when developing ICT solutions for the benefit of older adults. The main innovation lies in the representativeness of the basic senior personas. The personas build on multifaceted quantitative data from a single source including micro-level information from roughly 12,500 older individuals living in different European countries. The resulting personas may be applied in their basic form but are extendable to specific contexts. Also, the suggested tool addresses the drawbacks of currently existing personas describing older adults by being representative and cost-efficient. The basic senior personas, a filter tool, a manual and templates for "persona marketing" articles are available for free online."
  • Thematic organization of web content for distraction-free text-to-speech narration
    "In this paper, we describe a new technique for identifying thematic segments by tightly coupling visual, structural, and linguistic features present in the content. A notable aspect of the technique is that it produces segments with very little irrelevant content. Another interesting aspect is that the clutter-free main content of a web page, that is produced by the Readability tool and the "Reader" feature of the Safari browser, emerges as a special case of the thematic segments created by our technique. We provide experimental evidence of the effectiveness of our technique in reducing clutter. We also describe a user study with 23 blind subjects assessing its impact on web accessibility."
  • Capture: a desktop display-centric text recorder
    "We present Capture, a novel display-centric text recorder that facilitates real-time access to onscreen text and its structure and contextual information, including data associated with both foreground and background windows. Capture provides an intelligent caching architecture that integrates with the standard accessibility framework available on modern operating systems to continuously track onscreen text and metadata. This enables fast, semantic information recording without any modifications to applications, window systems, or operating system kernels. The recorded data is useful for a variety of problem domains, including assistive technologies, desktop search, auditing, and predictive graphical user interfaces. We have implemented a Capture prototype on Linux with the GNOME Accessibility Toolkit. Our results on real desktop applications demonstrate that Capture provides low runtime overhead and much more complete recording of onscreen text than modern desktop screen readers used for visually impaired users."
  • Back navigation shortcuts for screen reader users
    "When screen reader users need to back track pages to re-find previously visited content, they are forced to listen to some portion of each unwanted page to recognize it. This makes aural back navigation inefficient, especially on large websites. To address this problem, we introduce topic- and list-based back: two navigation strategies that provide back browsing shortcuts by leveraging the conceptual structure of content-rich websites. Both are manifested in Webtime, an accessible website on the history of the Web. A controlled study (N=10) conducted at the Indiana School for the Blind and Visually Impaired compared topic- and list-based back to traditional back mechanisms while participants completed fact-finding tasks. Topic- and list-based back significantly decreased time-on-task and number of backtracked pages; the navigation shortcuts were also associated with positive improvements in perceived cognitive effort and navigation experience. The proposed strategies can operate as a supplement to current back mechanisms in information-rich websites."
  • Cognitive Function and Assistive Technology for Cognition: A Systematic Review
    "The relationship between assistive technology for cognition (ATC) and cognitive function was examined using a systematic review. A literature search identified 89 publications reporting 91 studies of an ATC intervention in a clinical population. The WHO International Classification of Functioning, Disability and Health (ICF) was used to categorize the cognitive domains being assisted and the tasks being performed. Results show that ATC have been used to effectively support cognitive functions relating to attention, calculation, emotion, experience of self, higher level cognitive functions (planning and time management) and memory. The review makes three contributions: (1) It reviews existing ATC in terms of cognitive function, thus providing a framework for ATC prescription on the basis of a profile of cognitive deficits, (2) it introduces a new classification of ATC based on cognitive function, and (3) it identifies areas for future ATC research and development."
  • Mainstream but Specialized: Mobile Technology for Cognitive Support in Education
    "In this study, two software development projects were introduced to support timekeeping and reading for students with cognitive disabilities using mainstream mobile technology. In the first project, two versions of a countdown timer were developed that showed the remaining time graphically, by the area size. The ebook reader developed in the second project offered students the chance to point to a phrase and have it read aloud with a highlight box around the characters. It was important for the students to have a digital replica of the printed textbook being used at the same time by others in the class. The study highlighted a key consideration for assistive technology development for those with cognitive disabilities: that of the essential balance between technical features and human skills, such as the system's ease of use, look and feel as well as cognitive adaptation, whilst applying mainstream technology to the provision of specialized support. The study also showed that solutions to time and reading difficulties should be considered in relation to available technology and the surroundings of the users."


Visit The Clear Helper Blog: Developing best practices of Web accessibility for people with intellectual / cognitive disabilities.