Cognitive Web Accessibility: Assistive Technology 2011

Published in 2011, these resources are original studies, literature reviews, and related articles that cite references.

  • On the intelligibility of fast synthesized speech for individuals with early-onset blindness
    "In this paper we report the results of a pilot experiment on the intelligibility of fast synthesized speech for individuals with early-onset blindness. Using an open-response recall task, we collected data on four synthesis systems representing two major approaches to text-to-speech synthesis: formant-based synthesis and concatenative unit selection synthesis. We found a significant effect of speaking rate on intelligibility of synthesized speech, and a trend towards significance for synthesizer type. In post-hoc analyses, we found that participant-related factors, including age and familiarity with a synthesizer and voice, also affect intelligibility of fast synthesized speech."
  • Supporting blind photography
    "In this paper, we present the results of a large survey that shows how blind people are currently using cameras. Next, we introduce EasySnap, an application that provides audio feedback to help blind people take pictures of objects and people and show that blind photographers take better photographs with this feedback. We then discuss how we iterated on the portrait functionality to create a new application called PortraitFramer designed specifically for this function. Finally, we present the results of an in-depth study with 15 blind and low-vision participants, showing that they could pick up how to successfully use the application very quickly."
  • Improving calibration time and accuracy for situation-specific models of color differentiation
    "Color vision deficiencies (CVDs) cause problems in situations where people need to differentiate the colors used in digital displays. Recoloring tools exist to reduce the problem, but these tools need a model of the user's color-differentiation ability in order to work. Situation-specific models are a recent approach that accounts for all of the factors affecting a person's CVD (including genetic, acquired, and environmental causes) by using calibration data to form the model. This approach works well, but requires repeated calibration - and the best available calibration procedure takes more than 30 minutes. To address this limitation, we have developed a new situation-specific model of human color differentiation (called ICD-2) that needs far fewer calibration trials. The new model uses a color space that better matches human color vision compared to the RGB space of the old model, and can therefore extract more meaning from each calibration test. In an empirical comparison, we found that ICD-2 is 24 times faster than the old approach, and had small but significant gains in accuracy. The efficiency of ICD-2 makes it feasible for situation-specific models of individual color differentiation to be used in the real world."
  • Automatically generating tailored accessible user interfaces for ubiquitous services
    "Ambient Assisted Living environments provide support to people with disabilities and elderly people, usually at home. This concept can be extended to public spaces, where ubiquitous accessible services allow people with disabilities to access intelligent machines such as information kiosks. One of the key issues in achieving full accessibility is the instantaneous generation of an adapted accessible interface suited to the specific user that requests the service. In this paper we present the method used by the EGOKI interface generator to select the most suitable interaction resources and modalities for each user in the automatic creation of the interface. The validation of the interfaces generated for four different types of users is presented and discussed."
  • Blind people and mobile touch-based text-entry: acknowledging the need for different flavors
    "Good spatial ability is still required to have a notion of the device and its interface, as is the need to memorize buttons' positions on screen. These abilities, like many other individual attributes such as age, age of blindness onset, or tactile sensibility, are often forgotten, as the blind population is presented with the same methods regardless of capabilities and needs. Herein, we present a study with 13 blind people consisting of a touch screen text-entry task with four different methods. Results show that different capability levels have a significant impact on performance and that this impact is related to the different methods' demands. These variances acknowledge the need to account for individual characteristics and to give space for difference, towards inclusive design."
  • A mobile phone based personal narrative system
    "Based on user feedback from the previous project "How was School today?" we developed a modular system where school staff can use a mobile phone to track interaction with people and objects and user location at school. The phone also allows taking digital photographs and recording voice message sets by both school staff and parents/carers at home. These sets can be played back by the child for immediate narrative sharing similar to established AAC device interaction using sequential voice recorders. The mobile phone sends all the gathered data to a remote server. The data can then be used for automatic narrative generation on the child's PC based communication aid. Early results from the ongoing evaluation of the application in a special school with two participants and school staff show that staff were able to track interactions, record voice messages and take photographs. Location tracking was less successful, but was supplemented by timetable information. The participating children were able to play back voice messages and show photographs on the mobile phone for interactive narrative sharing using both direct and switch activated playback options."
  • Accessibility of 3D game environments for people with Aphasia: an exploratory study
    "We report a study undertaken to investigate the issues that confront people with aphasia when interacting with technology, specifically 3D game environments. Five people with aphasia were observed and interviewed in twelve workshop sessions. We report the key themes that emerged from the study, such as the importance of direct mappings between users' interactions and actions in a virtual environment. The results of the study provide some insight into the challenges, but also the opportunities, these mainstream technologies offer to people with aphasia. We discuss how these technologies could be more supportive and inclusive for people with language and communication difficulties."
  • Annotation-based video enrichment for blind people: a pilot study on the use of earcons and speech synthesis
    "Our approach to address the question of online video accessibility for people with sensory disabilities is based on video annotations that are rendered as video enrichments during the playing of the video. We present an exploratory work that focuses on video accessibility for blind people with audio enrichments composed of speech synthesis and earcons (i.e. nonverbal audio messages). Our main results are that earcons can be used together with speech synthesis to enhance understanding of videos; that earcons should be accompanied with explanations; and that a potential side effect of earcons is related to video rhythm perception."
  • Evaluating quality and comprehension of real-time sign language video on mobile phones
    "In this survey, we determine how well PSNR matches human comprehension of sign language video. We use very low bitrates (10-60 kbps) and two low spatial resolutions (192×144 and 320×240 pixels) which may be typical of video transmission on mobile phones using 3G networks. In a national online video-based user survey of 103 respondents, we found that respondents preferred the 320×240 spatial resolution transmitted at 20 kbps and higher; this does not match what PSNR results would predict. However, when comparing perceived ease/difficulty of comprehension, we found that responses did correlate well with measured PSNR. This suggests that PSNR may not be suitable for representing subjective video quality, but can be reliable as a measure for comprehensibility of American Sign Language (ASL) video. These findings are applied to our experimental mobile phone application, MobileASL, which enables real-time sign language communication for Deaf users at low bandwidths over the U.S. 3G cellular network."
  • Assessing the deaf user perspective on sign language avatars
    "Signing avatars have the potential to become a useful and even cost-effective method to make written content more accessible for Deaf people. However, avatar research is characterized by the fact that most researchers are not members of the Deaf community, and that Deaf people as potential users have little or no knowledge about avatars. Therefore, we suggest two well-known methods, focus groups and online studies, as a two-way information exchange between research and the Deaf community. Our aim was to assess signing avatar acceptability, shortcomings of current avatars and potential use cases. We conducted two focus group interviews (N=8) and, to quantify important issues, created an accessible online user study (N=317). This paper deals with both the methodology used and the elicited opinions and criticism. While we found a positive baseline response to the idea of signing avatars, we also show that there is a statistically significant increase in positive opinion caused by participating in the studies. We argue that inclusion of Deaf people on many levels will foster acceptance as well as provide important feedback regarding key aspects of avatar technology that need to be improved."
  • Evaluating importance of facial expression in American Sign Language and Pidgin Signed English animations
    "To quantify the suggestions of deaf participants in our prior studies, we experimentally evaluated ASL and PSE animations with and without various types of facial expressions, and we found that their inclusion does lead to measurable benefits for the understandability and perceived quality of the animations. This finding provides motivation for our future work on facial expressions in ASL and PSE animations, and it lays a novel methodological groundwork for evaluating the quality of facial expressions for conveying prosodic or grammatical information."
  • We need to communicate!: helping hearing parents of deaf children learn American Sign Language
    "We are in the process of creating a mobile application to help hearing parents learn ASL. To this end, we have interviewed members of our target population to gain understanding of their motivations and needs when learning sign language. We found that the most common motivation for parents learning ASL is better communication with their children. Parents are most interested in acquiring more fluent sign language skills through learning to read stories to their children."
  • ACES: aphasia emulation, realism, and the turing test
    "This paper provides a validation of ACES' distortions through a Turing Test experiment with participants from the Speech and Hearing Science community. It illustrates that text samples generated with ACES distortions are generally not distinguishable from text samples originating from individuals with aphasia. This paper also explores ACES distortions through a 'How Human is it' test, in which participants explicitly rate how human- or computer-like distortions appear to be."
  • Humsher: a predictive keyboard operated by humming
    "This paper presents Humsher -- a novel text entry method operated by the non-verbal vocal input, specifically the sound of humming. The method utilizes an adaptive language model for text prediction. Four different user interfaces are presented and compared. Three of them use dynamic layout in which n-grams of characters are presented to the user to choose from according to their probability in given context. The last interface utilizes static layout, in which the characters are displayed alphabetically and a modified binary search algorithm is used for an efficient selection of a character. All interfaces were compared and evaluated in a user study involving 17 able-bodied subjects. Case studies with four disabled people were also performed in order to validate the potential of the method for motor-impaired users. The average speed of the fastest interface was 14 characters per minute, while the fastest user reached 30 characters per minute. Disabled participants were able to type at 14 -- 22 characters per minute after seven sessions."
  • Leveraging large data sets for user requirements analysis
    "In this paper, we show how a large demographic data set that includes only high-level information about health and disability can be used to specify user requirements for people with specific needs and impairments. As a case study, we consider adapting spoken dialogue systems (SDS) to the needs of older adults. Such interfaces are becoming increasingly prevalent in telecare and home care, where they will often be used by older adults...We conclude that while SDS are ideal for solutions that are delivered on the near ubiquitous landlines, they need to be accessible for people with mild to moderate hearing problems, and thus multimodal solutions should be based on the television, a technology even more widespread than landlines."
  • Navigation and obstacle avoidance help (NOAH) for older adults with cognitive impairment: a pilot study
    "In this paper, we describe an intelligent wheelchair that uses computer vision and machine learning methods to provide adaptive navigation assistance to users with cognitive impairment. We demonstrate the performance of the system in a user study with the target population. We show that the collision avoidance module of the system successfully decreases the number of collisions for all participants. We also show that the wayfinding module assists users with memory and vision impairments. We share feedback from the users on various aspects of the intelligent wheelchair system. In addition, we provide our own observations and insights on the target population and their use of intelligent wheelchairs. Finally, we suggest directions for future work."
  • Situation-based indoor wayfinding system for the visually impaired
    "This paper presents an indoor wayfinding system to help the visually impaired find their way to a given destination in an unfamiliar environment. The main novelty is the use of the user's situation as the basis for designing color codes that encode environmental information and for developing the wayfinding system to detect and recognize those color codes. People require different information according to their situations. Therefore, situation-based color codes are designed, including location-specific codes and guide codes. These color codes are affixed in certain locations to provide information to the visually impaired, and their location and meaning are then recognized by the proposed wayfinding system. The proposed system consists of three steps: it first recognizes the current situation using a vocabulary tree built on the shape properties of images taken of various situations. Next, it detects and recognizes the necessary codes according to the current situation, based on color and edge information. Finally, it provides the user with environmental information and their path through an auditory interface. To assess the validity of the proposed wayfinding system, we conducted a field test with four visually impaired participants; the results showed that they could find the optimal path in real time with an accuracy of 95%."
  • Supporting spatial awareness and independent wayfinding for pedestrians with visual impairments
    "In this paper, we examine how mobile location-based computing systems can be used to increase the feeling of independence in travelers with visual impairments. A set of formative interviews with people with visual impairments showed that increasing one's general spatial awareness is the key to greater independence. This insight guided the design of Talking Points 3 (TP3), a mobile location-aware system for people with visual impairments that seeks to increase the legibility of the environment for its users in order to facilitate navigating to desired locations, exploration, serendipitous discovery, and improvisation. We conducted studies with eight legally blind participants in three campus buildings in order to explore how and to what extent TP3 helps promote spatial awareness for its users. The results shed light on how TP3 helped users find destinations in unfamiliar environments, but also allowed them to discover new points of interest, improvise solutions to problems encountered, develop personalized strategies for navigating, and, in general, enjoy a greater sense of independence."
  • Towards a framework to situate assistive technology design in the context of culture
    "We present the findings from a cross-cultural study of the expectations and perceptions of individuals with autism and other intellectual disabilities (AOID) in Kuwait, Pakistan, South Korea, and the United States. Our findings exposed cultural nuances that have implications for the design of assistive technologies. We develop a framework based on three themes: 1) lifestyle; 2) socio-technical infrastructure; and 3) monetary and informational resources, within which the cultural implications and opportunities for assistive technology were explored. The three key contributions of this work are: 1) the development of a framework that outlines how culture impacts perceptions and expectations of individuals with social and intellectual disabilities; 2) a mapping of how this framework leads to implications and opportunities for assistive technology design; and 3) the presentation of concrete examples of how these implications impact the design of three emerging assistive technologies."
  • Empowering individuals with do-it-yourself assistive technology
    "This paper illustrates that it is possible to custom-build Assistive Technology, and argues why empowering users to make their own Assistive Technology can improve the adoption process (and subsequently adoption rates). We discuss DIY experiences and impressions from individuals who have either built Assistive Technology before, or rely on it. We found that increased control over design elements, passion, and cost motivated individuals to make their own Assistive Technology instead of buying it. We discuss how a new generation of rapid prototyping tools and online communities can empower more individuals. We synthesize our findings into design recommendations to help promote future DIY-AT success."
  • The design of human-powered access technology
    "In this paper, we frame recent developments in human computation in the historical context of accessibility, and outline a framework for discussing new advances in human-powered access technology. Specifically, we present a set of 13 design principles for human-powered access technology motivated both by historical context and current technological developments. We then demonstrate the utility of these principles by using them to compare several existing human-powered access technologies. The power of identifying the 13 principles is that they will inspire new ways of thinking about human-powered access technologies."
  • Development of an AT selection tool using the ICF model
    "Assistive Technology (AT) is regularly provided by health and social services to many people with a wide range of needs or disabilities, to overcome barriers and difficulties in daily life. This research aims to develop a tool that may support practitioners to meet clients' individual needs when selecting AT by using a common language and structure for the process. A tool was developed incorporating the International Classification of Functioning, Disability and Health (ICF) model, collating information in a systematic order. Experts in the field of AT were consulted during the development of the tool and initial evaluation, and provided positive feedback on the research aims and approach. This paper describes the development of the tool and the potential added value of using the ICF in AT selection, with links to existing models and instruments. Plans for further development and testing of the tool are outlined."
  • Locating assistive technology within an emancipatory disability research framework
    "Assistive technology (AT) provides an interface between a disabled individual and his or her environment. Historically, AT practice and research has focused on how a device can augment or replace the function of an individual, with less emphasis on how the environment creates disabling conditions resulting in the need to use AT. Researchers have primarily used positivist approaches to study the impact of an AT, although there has been a more recent inclusion of qualitative approaches. Emancipatory disability research, with a focus on empowerment, reciprocity, relevance, and action against societal oppression, has had a minimal uptake in the AT field and yet holds great promise for addressing the environmental aspect of the person-AT-environment interaction. The purpose of this paper is to explore the congruence between AT, the social model of disability, and emancipatory disability research. The aim is to demonstrate that those in the AT field can benefit by adopting emancipatory principles and approaches in conducting research, developing new technologies, and providing services to AT users. Research that addresses individual impairments while addressing the environmental barriers that create disability can co-exist; embracing both views will be essential to the future of AT."
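Several entries above rest on quantitative metrics. The MobileASL study, for example, compares PSNR against human comprehension of sign language video. As a point of reference, here is a minimal sketch of how PSNR is computed for a pair of frames; this is the standard textbook definition, not code from the study, and the frame data is made up:

```python
import math

def psnr(reference, degraded, max_value=255.0):
    """Peak signal-to-noise ratio between two equal-length sequences
    of 8-bit pixel values (higher means a closer match)."""
    if len(reference) != len(degraded):
        raise ValueError("frames must have the same size")
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * math.log10(max_value ** 2 / mse)

# Example: a tiny "frame" degraded by a uniform error of 5 per pixel
ref = [100, 120, 140, 160]
deg = [105, 125, 145, 165]
print(round(psnr(ref, deg), 2))  # → 34.15
```

The survey's point is precisely that a higher PSNR computed this way does not always track subjective preference, even though it correlated with perceived ease of comprehension.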
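The Humsher abstract mentions a static layout in which a modified binary search selects a character from an alphabetical list. The authors' actual implementation is not given in the abstract; the following hypothetical sketch illustrates only the basic idea, with a callback standing in for the user's hum input:

```python
def select_character(alphabet, answer_is_in_first_half):
    """Narrow an alphabetically ordered list to a single character by
    repeatedly asking which half contains the target. In a humming UI,
    the callback would be answered by two distinguishable hum sounds;
    here it is an ordinary function for illustration."""
    lo, hi = 0, len(alphabet)
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if answer_is_in_first_half(alphabet[lo:mid]):
            hi = mid  # target is in the first half
        else:
            lo = mid  # target is in the second half
    return alphabet[lo]

# Simulated user who wants the letter "h": needs only ~log2(26) ≈ 5 answers
target = "h"
picked = select_character("abcdefghijklmnopqrstuvwxyz",
                          lambda half: target in half)
print(picked)  # → h
```

A static layout like this trades per-character speed for predictability: the display never rearranges, which the abstract contrasts with the adaptive n-gram layouts.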


Visit The Clear Helper Blog: Developing best practices of Web accessibility for people with intellectual / cognitive disabilities.