Cognitive Web Accessibility: Assistive Technology 2013

Published in 2013, these resources include original studies, literature reviews, and related articles that cite their references.

  • "Pray before you step out": describing personal and situational blind navigation behaviors
    "We conducted a formative study exploring how people with vision impairments used technology to support navigation. Our findings from interviews with 30 adults with vision impairments included insights about experiences in Orientation & Mobility (O&M) training, everyday navigation challenges, helpful and unhelpful technologies, and the role of social interactions while navigating. We produced a set of categorical data that future technologists can use to identify user requirements and usage scenarios. These categories consist of Personality and Scenario attributes describing navigation behaviors of people with vision impairments. We demonstrate the usefulness of these attributes by introducing navigation-style personas backed by our data. This work demonstrates the complex choices individuals with vision impairments undergo when leaving their home, and the many factors that affect their navigation behavior."
  • Wheelchair-based game design for older adults
    "In our work, we address the design of wheelchair-accessible motion-based games. We present KINECTWheels, a toolkit designed to integrate wheelchair movements into motion-based games, and Cupcake Heaven, a wheelchair-based video game designed for older adults using wheelchairs. Results of two studies show that KINECTWheels can be applied to make motion-based games wheelchair-accessible, and that wheelchair-based games engage older adults. Through the application of the wheelchair as an enabling technology in play, our work has the potential of encouraging older adults to develop a positive relationship with their wheelchair."
  • What health topics older adults want to track: a participatory design study
    "We conducted a participatory design study, where 5 groups of older adults created 5 designs. Four groups identified at least 1 health metric not currently offered in either the iPhone app store or the Google Play store. At the end of the sessions we administered a questionnaire to determine what health topics participants would like to track via smartphone or tablet. The designs included 13 health topics that were not on the questionnaire. Seventeen of eighteen participants expressed interest in tracking health metrics using a smartphone/tablet despite having little experience with these devices. This shows that older adults have unique ideas that are not being considered by current technology designers. We conclude with recommendations for future development, and propose continuing to involve to older adults in participatory design."
  • Mixed local and remote participation in teleconferences from a deaf and hard of hearing perspective
    "In this experience report we describe the accessibility challenges that deaf and hard of hearing committee members faced while collaborating with a larger group of hearing committee members over a period of 2½ years. We explain what some recurring problems are, how audio-only conferences fall short even when relay services and interpreters are available, and how we devised a videoconferencing setup using FuzeMeeting to minimize the accessibility barriers. We also describe some best practices, as well as lessons learned, and pitfalls to avoid in deploying this type of setup."
  • How someone with a neuromuscular disease experiences operating a PC (and how to successfully counteract that)
    "This paper describes the experiences of the first author, who has been diagnosed with the neuromuscular disease Friedreich's Ataxia more than 25 years ago, with the innovative approach to human-computer interaction characterized by the software tool OnScreenDualScribe. Originally developed by (and for!) the first author, the tool replaces the standard input devices -- i.e., keyboard and mouse -- with a small numerical keypad, making optimal use of his abilities. The paper attempts to illustrate some of the difficulties the first author usually has to face when operating a computer, due to considerable motor problems. It will be shown what he tried in the past, and why OnScreenDualScribe, offering various assistive techniques -- including word prediction, an ambiguous keyboard, and stepwise pointing operations -- is indeed a viable alternative. The ultimate goal is to help not only one single person, but to make the system -- which does not accelerate entry very much, but clearly reduces the required effort -- available to anyone with similar conditions."
  • Visual complexity, player experience, performance and physical exertion in motion-based games for older adults
    "We decompose the player experience of older adults engaging with motion-based games, focusing on the effects of manipulations of the game representation through the visual channel (visual complexity), since it is the primary interaction modality of most games and since vision impairments are common amongst older adults. We examine the effects of different levels of visual complexity on player experience, performance, and exertion in a study with fifteen participants. Our results show that visual complexity affects the way games are perceived in two ways: First, while older adults do have preferences in terms of visual complexity of video games, notable effects were only measurable following drastic variations. Second, perceived exertion shifts depending on the degree of visual complexity. These findings can help inform the design of motion-based games for therapy and rehabilitation for older adults."
  • Uncovering information needs for independent spatial learning for users who are visually impaired
    "This paper examines the practices that visually impaired individuals use to learn about their environments and the associated challenges. In the first of our two studies, we uncover four types of information needed to master and navigate the environment. We detail how individuals' context impacts their ability to learn this information, and outline requirements for independent spatial learning. In a second study, we explore how individuals learn about places and activities in their environment. Our findings show that users not only learn information to satisfy their immediate needs, but also to enable future opportunities -- something existing technologies do not fully support. From these findings, we discuss future research and design opportunities to assist the visually impaired in independent spatial learning."
  • UbiBraille: designing and evaluating a vibrotactile Braille-reading device
    " In this paper, we present a vibrotactile reading device that leverages the users' Braille knowledge to read textual information. UbiBraille consists of six vibrotactile actuators that are used to code a Braille cell and communicate single characters. The device is able to simultaneously actuate the users' index, middle, and ring fingers of both hands, providing fast and mnemonic output. We conducted two user studies on UbiBraille to assess both character and word reading performance. Character recognition rates ranged from 54% to 100% and were highly character- and user-dependent. Indeed, participants with greater expertise in Braille reading/writing were able to take advantage of this knowledge and achieve higher accuracy rates. Regarding word reading performance, we investigated four different vibrotactile timing conditions. Participants were able to read entire words and obtained recognition rates up to 93% with the most proficient ones being able achieve a rate of 1 character per second."
  • Touchplates: low-cost tactile overlays for visually impaired touch screen users
    "We introduce touchplates, carefully designed tactile guides that provide tactile feedback for touch screens in the form of physical guides that are overlaid on the screen and recognized by the underlying application. Unlike prior approaches to integrating tactile feedback with touch screens, touchplates are implemented with simple plastics and use standard touch screen software, making them versatile and inexpensive. Touchplates may be customized to suit individual users and applications, and may be produced on a laser cutter, 3D printer, or made by hand. We describe the design and implementation of touchplates, a "starter kit" of touchplates, and feedback from a formative evaluation with 9 people with visual impairments. Touchplates provide a low-cost, adaptable, and accessible method of adding tactile feedback to touch screen interfaces."
  • Safe walking technology for people with dementia: what do they want?
    "This paper presents an attempt to understand how safe walking technology can be designed to fit the needs of people with dementia. Taking inspiration from modern dementia care philosophy, and its emphasis on the individual with dementia, we have performed in-depth investigations of three persons' experiences of living with early-stage dementia. From interviews and co-design workshops with them and their family caregivers, we identified several factors that influence people with dementia's attitudes toward safe walking technology, and how they want the technology to assist them. Relevant factors include: The desire for control and self-management, the subjective experiences of symptoms, personal routines and skills, empathy for care-givers, and the local environment in which they live. Based on these findings, we argue there is a need to reconsider "surveillance" as a concept on which to base design of safe walking technology. We also discuss implications for design ethics."
  • IncluCity: using contextual cues to raise awareness on environmental accessibility
    "In this paper we demonstrate that contextual cues can enhance people's perception and understanding of accessibility. We describe a two-week study where our participants submitted reports of inaccessible spots all over the city through a web application. Using a 2x2 factorial design we contrast the impact of two types of contextual cues, visual cues (i.e., displaying a picture of the inaccessible spot) and location cues (i.e., ability to zoom-in the exact location). We measure participants' perceptions of accessibility and how they are challenged to consider their own limitations and barriers that may also affect themselves in certain circumstances. Our results suggest that visual cues led to a bigger sense of urgency while also improving participants' attitude towards disability."
  • Real time object scanning using a mobile phone and cloud-based visual search engine
    "In this paper, we present Scan Search, a mobile application that offers a new way for blind people to take high-quality photos to support recognition tasks. To support realtime scanning of objects, we developed a key frame extraction algorithm that automatically retrieves high-quality frames from continuous camera video stream of mobile phones. Those key frames are streamed to a cloud-based recognition engine that identifies the most significant object inside the picture. This way, blind users can scan for objects of interest and hear potential results in real time. We also present a study exploring the tradeoffs in how many photos are sent, and conduct a user study with 8 blind participants that compares Scan Search with a standard photo-snapping interface. Our results show that Scan Search allows users to capture objects of interest more efficiently and is preferred by users to the standard interface."
  • Physical accessibility of touchscreen smartphones
    "This paper examines the use of touchscreen smartphones, focusing on physical access. Using interviews and observations, we found that participants with dexterity impairment considered a smartphone both useful and usable, but tablet devices offer several important advantages. Cost is a major barrier to adoption. We describe usability problems that are not addressed by existing accessibility options, and observe that the dexterity demands of important accessibility features made them unusable for many participants. Despite participants' enthusiasm for both smartphones and tablet devices, their potential is not yet fully realized for this population."
  • Answering visual questions with conversational crowd assistants
    "In this paper, we introduce Chorus:View, a system that assists users over the course of longer interactions by engaging workers in a continuous conversation with the user about a video stream from the user's mobile device. We demonstrate the benefit of using multiple crowd workers instead of just one in terms of both latency and accuracy, then conduct a study with 10 blind users that shows Chorus:View answers common visual questions more quickly and accurately than existing approaches. We conclude with a discussion of users' feedback and potential future work on interactive crowd support of blind users."
  • Improving public transit accessibility for blind riders by crowdsourcing bus stop landmark locations with Google street view
    "In this paper, we introduce and evaluate a new scalable method for collecting bus stop location and landmark descriptions by combining online crowdsourcing and Google Street View (GSV). We conduct and report on three studies in particular: (i) a formative interview study of 18 people with visual impairments to inform the design of our crowdsourcing tool; (ii) a comparative study examining differences between physical bus stop audit data and audits conducted virtually with GSV; and (iii) an online study of 153 crowd workers on Amazon Mechanical Turk to examine the feasibility of crowdsourcing bus stop audits using our custom tool with GSV. Our findings reemphasize the importance of landmarks in non-visual navigation, demonstrate that GSV is a viable bus stop audit dataset, and show that minimally trained crowd workers can find and identify bus stop landmarks with 82.5% accuracy across 150 bus stop locations (87.3% with simple quality control)."
  • Improved inference and autotyping in EEG-based BCI typing systems
    "The RSVP Keyboard™ is a brain-computer interface (BCI)-based typing system for people with severe physical disabilities, specifically those with locked-in syndrome (LIS). It uses signals from an electroencephalogram (EEG) combined with information from an n-gram language model to select letters to be typed. One characteristic of the system as currently configured is that it does not keep track of past EEG observations, i.e., observations of user intent made while the user was in a different part of a typed message. We present a principled approach for taking all past observations into account, and show that this method results in a 20% increase in simulated typing speed under a variety of conditions on realistic stimuli. We also show that this method allows for a principled and improved estimate of the probability of the backspace symbol, by which mis-typed symbols are corrected. Finally, we demonstrate the utility of automatically typing likely letters in certain contexts, a technique that achieves increased typing speed under our new method, though not under the baseline approach."
  • Good fonts for dyslexia
    "In this paper, we present the first experiment that uses eye-tracking to measure the effect of font type on reading speed. Using a within-subject design, 48 subjects with dyslexia read 12 texts with 12 different fonts. Sans serif, monospaced and roman font styles significantly improved the reading performance over serif, proportional and italic fonts. On the basis of our results, we present a set of more accessible fonts for people with dyslexia."
  • Follow that sound: using sonification and corrective verbal feedback to teach touchscreen gestures
    "We propose and evaluate two techniques to teach touchscreen gestures to users with visual impairments: (1) corrective verbal feedback using text-to-speech and automatic analysis of the user's drawn gesture; (2) gesture sonification to generate sound based on finger touches, creating an audio representation of a gesture. To refine and evaluate the techniques, we conducted two controlled lab studies. The first study, with 12 sighted participants, compared parameters for sonifying gestures in an eyes-free scenario and identified pitch + stereo panning as the best combination. In the second study, 6 blind and low-vision participants completed gesture replication tasks with the two feedback techniques. Subjective data and preliminary performance findings indicate that the techniques offer complementary advantages."
  • Eyes-free yoga: an exergame using depth cameras for blind & low vision exercise
    "We developed Eyes-Free Yoga, an exergame using the Microsoft Kinect that acts as a yoga instructor, teaches six yoga poses, and has customized auditory-only feedback based on skeletal tracking. We ran a controlled study with 16 people who are blind or low vision to evaluate the feasibility and feedback of Eyes-Free Yoga. We found participants enjoyed the game, and the extra auditory feedback helped their understanding of each pose. The findings of this work have implications for improving auditory-only feedback and on the design of exergames using depth cameras."
  • Exploring the use of speech input by blind people on mobile devices
    "We conducted a survey with 169 blind and sighted participants to investigate how often, what for, and why blind people used speech for input on their mobile devices. We found that blind people used speech more often and input longer messages than sighted people. We then conducted a study with 8 blind people to observe how they used speech input on an iPod compared with the on-screen keyboard with VoiceOver. We found that speech was nearly 5 times as fast as the keyboard. While participants were mostly satisfied with speech input, editing recognition errors was frustrating. Participants spent an average of 80.3% of their time editing. Finally, we propose challenges for future work, including more efficient eyes-free editing and better error detection methods for reviewing text."
  • Do you see what I see?: designing a sensory substitution device to access non-verbal modes of communication
    " In this paper, we describe the design and development of a robust and real-time SSD called iFEPS -- improved Facial Expression Perception through Sound. The implementation of the iFEPS evolved over time through a participatory design process. We conducted both subjective and objective experiments to quantify the usability of the system. Evaluation with 14 subjects (7 blind + 7 blind-folded) shows that the users were able to perceive the facial expressions in most of the time. In addition, the overall subjective usability of the system was found to be scoring 4.02 in a 5 point Likert scale."
  • Comparing native signers' perception of American Sign Language animations and videos via eye tracking
    "This study quantifies how the eye gaze of native signers varies when they view: videos of a human ASL signer or synthesized animations of ASL (of different levels of quality). We found that, when viewing videos, signers spend more time looking at the face and less frequently move their gaze between the face and body of the signer. We also found correlations between these two eye-tracking metrics and participants' responses to subjective evaluations of animation-quality. This paper provides methodological guidance for how to design user studies evaluating sign language animations that include eye tracking, and it suggests how certain eye-tracking metrics could be used as an alternative or complimentary form of measurement in evaluation studies of sign language animation."
  • Bypassing lists: accelerating screen-reader fact-finding with guided tours
    "This paper investigates how blind users who navigate the web with screen-readers can bypass a scentless index with guided tours: a much simpler browsing pattern that linearly concatenates items of a collection. In a controlled study (N=11) at the Indiana School for the Blind and Visually Impaired (ISBVI), guided tours lowered user's cognitive effort and significantly decreased time-on-task and number of pages visited when compared to an index with poor information scent. Our findings suggest that designers can supplement indexes with guided tours to benefit screen-reader users in a variety of web navigation contexts."
  • Audio-visual speech understanding in simulated telephony applications by individuals with hearing loss
    "We present a study into the effects of the addition of a video channel, video frame rate, and audio-video synchrony, on the ability of people with hearing loss to understand spoken language during video telephone conversations. Analysis indicates that higher frame rates result in a significant improvement in speech understanding, even when audio and video are not perfectly synchronized. At lower frame rates, audio-video synchrony is critical: if the audio is perceived 100 ms ahead of video, understanding drops significantly; if on the other hand the audio is perceived 100 ms behind video, understanding does not degrade versus perfect audio-video synchrony. These findings are validated in extensive statistical analysis over two within-subjects experiments with 24 and 22 participants, respectively."
  • Architecture of an automated therapy tool for childhood apraxia of speech
    "We present a multi-tier system for the remote administration of speech therapy to children with apraxia of speech. The system uses a client-server architecture model and facilitates task-oriented remote therapeutic training in both in-home and clinical settings. Namely, the system allows a speech therapist to remotely assign speech production exercises to each child through a web interface, and the child to practice these exercises on a mobile device. The mobile app records the child's utterances and streams them to a back-end server for automated scoring by a speech-analysis engine. The therapist can then review the individual recordings and the automated scores through a web interface, provide feedback to the child, and adapt the training program as needed. We validated the system through a pilot study with children diagnosed with apraxia of speech, and their parents and speech therapists. Here we describe the overall client-server architecture, middleware tools used to build the system, the speech-analysis tools for automatic scoring of recorded utterances, and results from the pilot study. Our results support the feasibility of the system as a complement to traditional face-to-face therapy through the use of mobile tools and automated speech analysis algorithms."
  • AphasiaWeb: a social network for individuals with aphasia
    "We have developed AphasiaWeb, a social network designed exclusively for keeping individuals with aphasia and their friends and families connected. In this paper we describe the social network and share findings from a two-month trial program conducted with a local aphasia support group."
  • An empirical study of issues and barriers to mainstream video game accessibility
    "This paper presents the findings of a pair of complementary empirical studies intended to understand the current state of game accessibility in a grounded, real-world context and identify issues and barriers. The first study involved an online survey of 55 gamers with disabilities to elicit information about their play habits, experiences, and accessibility issues. The second study consisted of a series of semi-structured interviews with individuals from the game industry to better understand accessibility's situation in their design and development processes. Through quantitative and qualitative thematic analysis, we derive high-level insights from the data, such as the prevalence of assistive technology incompatibility and the value of middleware for implementing accessibility standardization. Finally, we discuss specific implications and how these insights can be used to define future work which may help to narrow the gap."
  • A web-based intelligibility evaluation of sign language video transmitted at low frame rates and bitrates
    "In an effort to understand how much sign language video quality can be sacrificed, we evaluated the perceived lower limits of intelligible sign language video transmitted at four low frame rates (1, 5, 10, and 15 frames per second [fps]) and four low fixed bitrates (15, 30, 60, and 120 kilobits per second [kbps]). We discovered an "intelligibility ceiling effect," where increasing the frame rate above 10 fps decreased perceived intelligibility, and increasing the bitrate above 60 kbps produced diminishing returns. Additional findings suggest that relaxing the recommended international video transmission rate, 25 fps at 100 kbps or higher, would still provide intelligible content while considering network resources and bandwidth consumption. As part of this work, we developed the Human Signal Intelligibility Model, a new conceptual model useful for informing evaluations of video intelligibility."
  • A haptic ATM interface to assist visually impaired users
    "This paper outlines the design and evaluation of a haptic interface intended to convey non audio-visual directions to an ATM (Automated Teller Machine) user. The haptic user interface is incorporated into an ATM test apparatus on the keypad. The system adopts a well known 'clock face' metaphor and is designed to provide haptic prompts to the user in the form of directions to the current active device, e.g. card reader or cash dispenser. Results of an evaluation of the device are reported that indicate that users with varying levels of visual impairment are able to appropriately detect, distinguish and act on the prompts given to them by the haptic keypad. As well as reporting on how participants performed in the evaluation we also report the results of a semi structured interview designed to find out how acceptable participants found the technology for use on a cash machine. As a further contribution the paper also presents observations on how participants place their hands on the haptic device and compare this with their performance."
  • Using iPods® and iPads® in teaching programs for individuals with developmental disabilities: A systematic review
    "We conducted a systematic review of studies that involved iPods®, iPads®, and related devices (e.g., iPhones®) in teaching programs for individuals with developmental disabilities. The search yielded 15 studies covering five domains: (a) academic, (b) communication, (c) employment, (d) leisure, and (e) transitioning across school settings. The 15 studies reported outcomes for 47 participants, who ranged from 4 to 27 years of age and had a diagnosis of autism spectrum disorder (ASD) and/or intellectual disability. Most studies involved the use of iPods® or iPads® and aimed to either (a) deliver instructional prompts via the iPod Touch® or iPad®, or (b) teach the person to operate an iPod Touch® or iPad® to access preferred stimuli. The latter also included operating an iPod Touch® or an iPad® as a speech-generating device (SGD) to request preferred stimuli. The results of these 15 studies were largely positive, suggesting that iPods®, iPod Touch®, iPads®, and related devices are viable technological aids for individuals with developmental disabilities."
  • Recommending assistive technology (AT) for children with multiple disabilities: A systematic review and qualitative synthesis of models and instruments for AT professionals
    "PURPOSE: To review the AT specific assessment models and instruments that have been developed for children with multiple disabilities in order to provide an overview of the strategies to be employed in interdisciplinary rehabilitation. METHOD: A systematic review was conducted utilizing the MEDLINE, CINAHL, PsycINFO, ERIC and ISI databases covering the period January 1990–September 2011. In addition, 4 conference proceedings, 35 journals and various web resources were hand searched. Papers were reviewed in three steps by three independent investigators according to specific inclusion and exclusion criteria. RESULTS: The search resulted in the finding of 25 papers. Only one model for structuring the AT assessment process and four instruments developed to support decisions about AT solutions for children with multiple disabilities were found. The validity and reliability of the models and instruments found are not documented in the literature reviewed. CONCLUSIONS: We argue that there is a need to develop validated models and instruments to guide AT professionals in the process of AT assessment for children with multiple disabilities."
  • The potential for technology to enhance independence for those aging with a disability
    "Technologies of all kinds can sustain and accelerate improvements in health and quality of life for an aging population, and enhance the independence of persons with disabilities. Assistive technologies are widely used to promote independent functioning, but the aging of users and their devices produces unique challenges to individuals, their families, and the health care system. The emergence of new “smart” technologies that integrate information technology with assistive technologies has opened a portal to the development of increasingly powerful, individualized tools to assist individuals with disabilities to meet their needs. Yet, issues of access and usability remain to be solved for their usefulness to be fully realized. New cohorts aging with disabilities will have more resources and more experience with integrated technologies than current elders. Attention to technological solutions that help them adapt to the challenges of later life is needed to improve quality of life for those living long lives with disabilities."

Visit The Clear Helper Blog: Developing best practices of Web accessibility for people with intellectual / cognitive disabilities.