TactileNet: Bridging the Accessibility Gap with AI-Generated Tactile Graphics for Individuals with Vision Impairment

Teaser

Our image-to-image translation pipeline.

Abstract

Tactile graphics are essential for providing access to visual information for the 43 million people globally living with vision loss. Traditional methods for creating these graphics are labor-intensive and cannot meet growing demand. We introduce TactileNet, the first comprehensive dataset and AI-driven framework for generating embossing-ready 2D tactile templates using text-to-image Stable Diffusion (SD) models. By integrating Low-Rank Adaptation (LoRA) and DreamBooth, our method fine-tunes SD models to produce high-fidelity, guideline-compliant graphics while reducing computational costs. Quantitative evaluations with tactile experts show 92.86% adherence to accessibility standards and near-human design fidelity (SSIM = 0.538 vs. expert benchmarks). Notably, our model preserves object silhouettes better than human designs (SSIM = 0.259 vs. 0.215 for binary masks), addressing a key limitation of manual tactile abstraction. The framework scales to 32,000 images (7,050 high-quality) across 66 classes, with prompt editing enabling customizable outputs (e.g., adding or removing details). By automating the 2D template generation step—compatible with standard embossing workflows—TactileNet accelerates production while preserving design flexibility. This work demonstrates how AI can augment (not replace) human expertise to bridge the accessibility gap in education and beyond. Code, data, and models will be publicly released to foster further research.
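
For readers who want to experiment once the models are released, the sketch below shows one plausible inference path with Hugging Face diffusers: loading a base SD checkpoint, attaching a DreamBooth+LoRA adapter, and using prompt editing to add or remove details. The base model id, adapter directory and file name, trigger prompt, and sampling settings are illustrative placeholders, not the actual TactileNet artifacts.

# Minimal inference sketch with Hugging Face diffusers.
# The LoRA path/file name and prompts are placeholders, not the released TactileNet weights.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach a LoRA adapter produced by DreamBooth-style fine-tuning.
pipe.load_lora_weights("path/to/lora_dir", weight_name="tactilenet_lora.safetensors")

# Base prompt for an embossing-ready 2D tactile template.
prompt = "tactile graphic of a giraffe, thick continuous outlines, plain white background"
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("giraffe_tactile.png")

# Prompt editing: add or remove details without retraining.
edited_prompt = prompt + ", simplified legs, no background objects"
edited = pipe(edited_prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
edited.save("giraffe_tactile_edited.png")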

Evaluation Protocol

  • 132 tactile images (66 generated, 66 sourced)
  • 66 natural reference/benchmark images
  • Human expert evaluation based on pose accuracy, guideline adherence, and overall quality rating
  • Structural Similarity Index (SSIM) computed across multiple modalities (a sketch follows this list)
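
As a rough illustration of the SSIM comparisons referenced in the final bullet, the sketch below computes grayscale SSIM and binary-mask SSIM between a generated template (G), a human-designed tactile graphic (T), and a natural reference image (N) with scikit-image. The file names, the common 512x512 resolution, and Otsu thresholding for the binary masks are assumptions, not the authors' exact evaluation script.

# Sketch of the SSIM comparisons; preprocessing choices (resize, Otsu threshold) are assumptions.
import numpy as np
from skimage.io import imread
from skimage.color import rgb2gray
from skimage.transform import resize
from skimage.filters import threshold_otsu
from skimage.metrics import structural_similarity as ssim

def load_gray(path, size=(512, 512)):
    """Load an image, convert to grayscale, and resize to a common shape in [0, 1]."""
    img = imread(path)
    if img.ndim == 3:
        img = rgb2gray(img[..., :3])  # drop alpha channel if present
    return resize(img, size, anti_aliasing=True)

generated = load_gray("generated_tactile.png")   # G: model output
tactile   = load_gray("sourced_tactile.png")     # T: human-designed tactile graphic
natural   = load_gray("natural_reference.png")   # N: natural benchmark image

# Grayscale SSIM (e.g., G vs. T and T vs. N).
print("G vs. T:", ssim(generated, tactile, data_range=1.0))
print("T vs. N:", ssim(tactile, natural, data_range=1.0))

def binarize(img):
    """Threshold to a silhouette-style binary mask (Otsu threshold assumed)."""
    return (img > threshold_otsu(img)).astype(np.float32)

# Binary-mask SSIM (silhouette preservation).
print("G vs. N (bin):", ssim(binarize(generated), binarize(natural), data_range=1.0))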

Results

Quality Ratings for Generated vs. Sourced Tactile Graphics

Category                     Generated (%)   Sourced (%)
Accepted As Is               32.14           35.71
Accepted with Minor Edits    39.23           39.23
Accepted with Major Edits    28.57           21.43
Rejected (Useless)           0.00            3.57

SSIM Comparison Across Modalities

Comparison       SSIM    Interpretation
G vs. T          0.538   Model matches human tactile design fidelity
T vs. N          0.549   Human tactile abstraction baseline
G vs. N (bin)    0.259   Model preserves silhouettes
T vs. N (bin)    0.215   Human silhouette abstraction baseline

G = generated tactile graphic, T = human-designed (sourced) tactile graphic, N = natural reference image; "bin" denotes comparison on binary masks.

BibTeX

@misc{khan2025tactilenetbridgingaccessibilitygap,
  title={TactileNet: Bridging the Accessibility Gap with AI-Generated Tactile Graphics for Individuals with Vision Impairment},
  author={Adnan Khan and Alireza Choubineh and Mai A. Shaaban and Abbas Akkasi and Majid Komeili},
  year={2025},
  eprint={2504.04722},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2504.04722},
}

Acknowledgements

This work was supported in part by MITACS and the Digital Alliance of Canada. We thank the dedicated student volunteers at the Intelligent Machines Lab, Carleton University, for their help with dataset curation and image matching.