For our purposes, blindness is defined as having no vision or very low vision, such that text cannot be read at any magnification.

If you think we are missing an important resource, please contribute information or contact us so we can add it.

Here you will find a number of other interesting summative documents that do not deal with the DeveloperSpace topic (access to ICT) but provide a broader view of other aspects of disability.

Turn Right: Analysis of Rotation Errors in Turn-by-Turn Navigation for Individuals with Visual Impairments

D. Ahmetovic, U. Oh, S. Mascetti, C. Asakawa

This work studies rotation errors and their effect on turn-by-turn guidance for individuals with visual impairments to inform the design of navigation assistance in real-world scenarios. A dataset of indoor trajectories of 11 blind participants using NavCog, a turn-by-turn smartphone navigation assistant, was collected.
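The core quantity this study examines, the rotation error, can be illustrated with a small sketch: the signed difference between the turn a user was instructed to make and the heading change they actually performed, wrapped to a half-open interval around zero. The function names and the wrapping convention are my own illustration, not the paper's implementation.

```python
def wrap_angle(deg):
    """Wrap an angle in degrees to the interval (-180, 180]."""
    deg = deg % 360.0
    if deg > 180.0:
        deg -= 360.0
    return deg

def rotation_error(instructed_turn, performed_turn):
    """Signed rotation error: positive means the user over-rotated
    relative to the instructed turn (degrees, counterclockwise)."""
    return wrap_angle(performed_turn - instructed_turn)
```

Wrapping matters because a 350-degree over-rotation and a 10-degree under-rotation describe the same final heading; the signed, wrapped value keeps the two distinguishable from larger errors.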

Easy Return: An App for Indoor Backtracking Assistance

G. Flores, R. Manduchi

A mobile app that lets people with visual impairments retrace their steps during indoor navigation. Participants walked simulated indoor paths of varying lengths while wearing different inertial sensors and an Apple Watch.
Indoor Navigation, Backtracking Assistance

WebinSitu: a comparative analysis of blind and sighted browsing behavior

Jeffrey P. Bigham, Anna C. Cavender, Jeremy T. Brudvik, Jacob O. Wobbrock, Richard E. Ladner

This work analyzes and compares browsing behavior between blind and sighted participants. An HTTP proxy monitored and collected all of the users' interactions with webpages over the course of a week.
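The proxy-based collection setup can be sketched as a small in-memory log that records one entry per proxied request. This is a minimal illustration of the idea only; the field names and log format here are hypothetical, not the study's actual schema.

```python
import json
import time

class InteractionLog:
    """Accumulates one record per request seen by a logging proxy."""

    def __init__(self):
        self.records = []

    def record(self, user_id, url, method="GET"):
        # Each proxied request becomes one timestamped record.
        self.records.append({
            "user": user_id,
            "url": url,
            "method": method,
            "timestamp": time.time(),
        })

    def dump(self):
        """Serialize all collected records as JSON for later analysis."""
        return json.dumps(self.records)
```

Placing the collector at the proxy layer, rather than in the browser, is what lets a study like this capture behavior from any browser or screen reader without installing instrumentation on each participant's machine.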


J. Sosa-Garcia, F. Odone

This work aims to create a system that identifies objects for blind users through visual recognition. Images from 7 different object categories were captured with wearable cameras and annotated manually.
Visual impairment


G. Flores, R. Manduchi

This study aims to understand how blind people navigate indoors using smartphones. Blind participants wore inertial sensors and carried smartphones that recorded accelerometer data.
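A common first step when working with accelerometer traces like these is naive step detection by threshold crossing on the acceleration magnitude. The sketch below is illustrative only, with a threshold I chose for the example; it is not the study's actual processing pipeline.

```python
import math

def count_steps(samples, threshold=11.0):
    """Count steps as upward crossings of an acceleration-magnitude
    threshold. samples: iterable of (ax, ay, az) tuples in m/s^2.
    Gravity contributes ~9.8 at rest, so walking peaks exceed it."""
    steps = 0
    above = False
    for ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold and not above:
            steps += 1       # count only the rising edge of each peak
            above = True
        elif mag <= threshold:
            above = False    # re-arm once the signal falls back down
    return steps
```

Counting only rising edges (rather than every sample above the threshold) keeps one step from being counted many times while the peak lasts.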

VizWiz dataset

D. Gurari, Q. Li, Abigale J. Stangl, A. Guo, C. Lin, K. Grauman, J. Luo, Jeffrey P. Bigham

This work aims to highlight the technological needs of blind people and to attract more researchers to accessibility research. The visual question answering (VQA) dataset consists of images taken by blind people and labeled using crowdsourcing.

ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition

D. Massiceti, L. Zintgraf, J. Bronskill, L. Theodorou, M. Harris, E. Cutrell, C. Morrison, K. Hofmann, S. Stumpf

This dataset is a collection of videos of objects recorded by people who are blind/low-vision on their mobile phones to drive research in Teachable Object Recognisers (TORs) under few-shot, high-variation conditions. Collectors recorded and submitted videos to the ORBIT benchmark dataset via an accessible iOS app.
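Few-shot evaluation of a teachable object recognizer is typically organized into "episodes": for each object, a handful of clips personalize the model (the support set) and held-out clips evaluate it (the query set). The sketch below shows that split in the abstract; the function and parameter names are mine, and the ORBIT benchmark's actual sampling protocol may differ.

```python
import random

def sample_episode(videos_by_object, n_support=5, n_query=3, seed=0):
    """Split each object's clips into a support set (for personalizing
    the recognizer) and a disjoint query set (for evaluating it)."""
    rng = random.Random(seed)  # seeded for reproducible episodes
    support, query = {}, {}
    for obj, clips in videos_by_object.items():
        clips = list(clips)
        rng.shuffle(clips)
        support[obj] = clips[:n_support]
        query[obj] = clips[n_support:n_support + n_query]
    return support, query
```

Keeping support and query clips disjoint per object is the point of the episode structure: it measures whether the recognizer generalizes to new views of a user's object, not whether it memorized the clips it was personalized on.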
Blindness, Low vision

Hands Holding Clues for Object Recognition in Teachable Machines

K. Lee, H. Kacorri

A dataset of hand-held objects collected to build an object recognizer usable by people with visual impairments. The dataset contains photographs taken by people with and without visual impairments: a sighted and a blind individual each collected images of objects using a smartphone camera.
Blindness, Low vision, Visual impairment


D. Gurari, Q. Li, C. Lin, Y. Zhao, A. Guo, A. Stangl, Jeffrey P. Bigham

Images from the original VizWiz dataset are tagged for the presence of private information, and regions containing that information are labeled and masked. This dataset can help train algorithms that prevent disclosure of private information shared, accidentally or otherwise, by blind people. Images were taken by blind people and labeled and masked using crowdsourcing.
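The region-masking step can be illustrated with a small sketch that blacks out a labeled rectangular region of an image. This is only an analogy to the idea described above; the dataset's actual masking procedure and region format are not reproduced here.

```python
def mask_region(image, box, fill=0):
    """Return a copy of the image with one rectangular region obscured.
    image: list of rows (each a list of pixel values);
    box: (top, left, bottom, right) bounds, half-open."""
    top, left, bottom, right = box
    masked = [row[:] for row in image]  # deep-copy rows, leave input intact
    for r in range(top, bottom):
        for c in range(left, right):
            masked[r][c] = fill
    return masked
```

Masking the labeled regions, rather than discarding the whole image, preserves the rest of the scene for training while removing the private content itself.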