Creating Accessible Online Floor Plans for Visually Impaired Readers


Journal article


Anuradha Madugalla, Kim Marriott, Simone Marinai, Samuele Capobianco, Cagatay Goncu
ACM Transactions on Accessible Computing, 2020

Cite
APA
Madugalla, A., Marriott, K., Marinai, S., Capobianco, S., & Goncu, C. (2020). Creating Accessible Online Floor Plans for Visually Impaired Readers. ACM Transactions on Accessible Computing.


Chicago/Turabian
Madugalla, Anuradha, Kim Marriott, Simone Marinai, Samuele Capobianco, and Cagatay Goncu. “Creating Accessible Online Floor Plans for Visually Impaired Readers.” ACM Transactions on Accessible Computing (2020).


MLA
Madugalla, Anuradha, et al. “Creating Accessible Online Floor Plans for Visually Impaired Readers.” ACM Transactions on Accessible Computing, 2020.


BibTeX

@article{madugalla2020floorplans,
  title = {Creating Accessible Online Floor Plans for Visually Impaired Readers},
  year = {2020},
  journal = {ACM Transactions on Accessible Computing},
  author = {Madugalla, Anuradha and Marriott, Kim and Marinai, Simone and Capobianco, Samuele and Goncu, Cagatay}
}

Abstract

We present a generic model for providing blind and severely vision-impaired readers with access to online information graphics. The model supports fully and semi-automatic transcription and allows the reader a choice of presentation media. We evaluate the model through a case study: online house floor plans. To do so, we conducted a formative user study with severely vision-impaired users to determine what information they would like from an online floor plan and how to present the floor plan as a text-only description, a tactile graphic, and on a touchscreen with audio feedback. We then built an automatic transcription tool using specialized graphics recognition algorithms. Finally, we measured the quality of the system's recognition and conducted a second user study to evaluate the usefulness of the accessible graphics produced by the tool in each of the three formats. The results generally support the design of the generic model and the usefulness of the tool we have produced. However, they also reveal the inability of current graphics recognition algorithms to handle unforeseen graphical conventions. This highlights the need for automatic transcription systems to return a level of confidence in the recognized components and to present this to the end user so they can have an appropriate level of trust.

