# SAEL — Seamless Access for Everyday Life

🧩 What This Project Is

SAEL is a mobile-first web application that empowers blind and vision-impaired individuals to independently complete physical forms.

While most accessibility solutions focus on digital interfaces, SAEL bridges the gap between the physical and digital worlds by transforming paper-based forms into a guided, voice-driven experience.

The platform works in three core stages:

  1. Scan — The user captures a document using their phone camera with real-time audio and haptic guidance
  2. Understand & Fill — The system extracts fields, explains them in plain language, and collects responses via voice
  3. Export & Share — The completed form is reconstructed and exported as a structured PDF

At its core, SAEL turns a static form into a conversation, enabling users not just to fill out paperwork but to understand it.
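To make the pipeline concrete, here is a minimal sketch of how its three stages could be typed; all names are illustrative, not SAEL's actual interfaces:

```typescript
// Illustrative types only — the names are ours, not a published API.

/** Output of the Scan stage: a captured image plus a confidence score. */
interface ScanResult {
  image: Blob;          // captured document photo
  confidence: number;   // 0..1, drives auto-capture
}

/** One extracted field with its position on the original page. */
interface FormField {
  label: string;                                        // e.g. "SSN"
  prompt: string;                                       // plain-language voice prompt
  box: { x: number; y: number; w: number; h: number };  // page coordinates
  answer?: string;                                      // filled in via voice
}

/** The three-stage pipeline, end to end. */
interface SaelPipeline {
  scan(): Promise<ScanResult>;
  extract(scan: ScanResult): Promise<FormField[]>;
  export(fields: FormField[]): Promise<Blob>; // completed PDF
}
```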


💡 What Inspired Us

We realized that despite the rapid digitization of services, physical forms still exist everywhere—especially in critical environments like hospitals, government offices, and legal institutions.

For blind users, these forms present a major barrier:

  • They often require assistance from others
  • They involve complex and technical language
  • They create a loss of privacy and independence

We were particularly inspired by the idea that:

Accessibility is not just about access, but about autonomy and dignity

This led us to ask:

How can we transform a physical form into an experience that is fully accessible, understandable, and independent?


🛠️ How We Built It

SAEL combines computer vision, voice interaction, and intelligent guidance into a unified system.

1. Document Scanning

  • Real-time document detection using on-device computer vision
  • Audio + haptic feedback to guide framing and alignment
  • Confidence-based capture system

We model scan confidence as a function:

C = f(a, s, e, l)

Where:

  • a = alignment
  • s = sharpness
  • e = edge detection completeness
  • l = lighting quality

The system automatically captures the document as C approaches 1.
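As a rough illustration, a weighted combination with an auto-capture threshold behaves this way; the weights, threshold, and signal extractors below are assumptions for the sketch, not our tuned values:

```typescript
// Illustrative confidence model — weights and threshold are assumptions.
interface FrameSignals {
  alignment: number; // a: 0..1, how square the document sits in frame
  sharpness: number; // s: 0..1, focus measure
  edges: number;     // e: 0..1, fraction of the four edges detected
  lighting: number;  // l: 0..1, exposure quality
}

const WEIGHTS = { alignment: 0.3, sharpness: 0.3, edges: 0.25, lighting: 0.15 };
const CAPTURE_THRESHOLD = 0.9; // capture as C approaches 1

function confidence(s: FrameSignals): number {
  return (
    WEIGHTS.alignment * s.alignment +
    WEIGHTS.sharpness * s.sharpness +
    WEIGHTS.edges * s.edges +
    WEIGHTS.lighting * s.lighting
  );
}

function shouldCapture(s: FrameSignals): boolean {
  return confidence(s) >= CAPTURE_THRESHOLD;
}
```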


2. Form Extraction & Understanding

  • OCR processes the scanned document
  • Azure Document Intelligence extracts structured fields and coordinates
  • Each field is converted into a voice prompt
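This writeup doesn't include our extraction code, but with the official `@azure/ai-form-recognizer` SDK the call could look roughly like this; the `prebuilt-document` model choice and the environment variable names are assumptions:

```typescript
import { DocumentAnalysisClient, AzureKeyCredential } from "@azure/ai-form-recognizer";

// Endpoint/key names are placeholders; "prebuilt-document" is one option
// for generic key-value extraction — the production model may differ.
const client = new DocumentAnalysisClient(
  process.env.AZURE_DI_ENDPOINT!,
  new AzureKeyCredential(process.env.AZURE_DI_KEY!)
);

async function extractFields(scan: Buffer) {
  const poller = await client.beginAnalyzeDocument("prebuilt-document", scan);
  const result = await poller.pollUntilDone();

  // Keep each field's label, any pre-filled value, and its page coordinates
  // so answers can be written back to the right spot later.
  return (result.keyValuePairs ?? []).map((kv) => ({
    label: kv.key.content,
    value: kv.value?.content ?? "",
    region: kv.key.boundingRegions?.[0], // page number + polygon
  }));
}
```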

Example transformation:

“SSN” → “Please provide your Social Security Number”

An AI assistant:

  • Simplifies complex language
  • Guides the user step-by-step
  • Explains why information is being requested

Voice input is handled via speech-to-text, allowing seamless interaction.
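One browser-native way to wire this prompt-and-listen loop is the Web Speech API; since the text above only specifies "speech-to-text", treat this particular API choice as an illustrative assumption:

```typescript
// Prompt-and-listen loop using the browser's Web Speech API.
// A cloud STT service would slot into listen() the same way.

function speak(prompt: string): Promise<void> {
  return new Promise((resolve) => {
    const utterance = new SpeechSynthesisUtterance(prompt);
    utterance.onend = () => resolve();
    speechSynthesis.speak(utterance);
  });
}

function listen(): Promise<string> {
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  return new Promise((resolve, reject) => {
    const rec = new Recognition();
    rec.lang = "en-US";
    rec.onresult = (e: any) => resolve(e.results[0][0].transcript);
    rec.onerror = (e: any) => reject(e.error);
    rec.start();
  });
}

// Ask one field's question, then capture the spoken answer.
async function askField(prompt: string): Promise<string> {
  await speak(prompt); // e.g. "Please provide your Social Security Number"
  return listen();
}
```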


3. Form Reconstruction

  • User responses are mapped back to their original coordinates
  • A completed digital version is generated
  • Output is exported as a structured PDF
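As a sketch of this reconstruction step, a library like `pdf-lib` can draw the original scan as the page background and stamp each answer at its extracted coordinates; the field shape here is hypothetical:

```typescript
import { PDFDocument, StandardFonts } from "pdf-lib";

// Hypothetical field shape: the user's spoken answer plus the top-left
// coordinates reported by the extraction step.
interface FilledField {
  answer: string;
  x: number;
  y: number; // top-left origin, as OCR tools usually report
}

async function reconstructPdf(scanJpg: Uint8Array, fields: FilledField[]) {
  const doc = await PDFDocument.create();
  const image = await doc.embedJpg(scanJpg);
  const page = doc.addPage([image.width, image.height]);
  const font = await doc.embedFont(StandardFonts.Helvetica);

  // The original scan becomes the page background...
  page.drawImage(image, { x: 0, y: 0, width: image.width, height: image.height });

  // ...and each answer is drawn at its field's coordinates. pdf-lib's
  // origin is bottom-left, so the y coordinate is flipped.
  for (const f of fields) {
    page.drawText(f.answer, { x: f.x, y: image.height - f.y, size: 12, font });
  }
  return doc.save(); // Uint8Array ready to download or share
}
```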

4. Privacy & Security

We prioritized user privacy by:

  • Processing sensitive data locally when possible
  • Obfuscating personally identifiable information (PII) before external calls
  • Encrypting data during transfer

Conceptually:

Secure Data = Encrypt(Obfuscate(Input))
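A conceptual sketch of that Obfuscate → Encrypt pipeline using the browser's Web Crypto API follows; the SSN regex and token scheme are simplified placeholders for real PII detection:

```typescript
// 1. Obfuscate: swap PII for placeholder tokens before any external call.
//    The regex covers only SSN-shaped strings; real detection is broader.
function obfuscate(input: string): { redacted: string; vault: Map<string, string> } {
  const vault = new Map<string, string>();
  let i = 0;
  const redacted = input.replace(/\b\d{3}-\d{2}-\d{4}\b/g, (ssn) => {
    const token = `[PII_${i++}]`;
    vault.set(token, ssn); // kept locally, never sent out
    return token;
  });
  return { redacted, vault };
}

// 2. Encrypt: AES-GCM via the Web Crypto API for data in transit.
async function encrypt(plaintext: string, key: CryptoKey) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
  const ciphertext = await crypto.subtle.encrypt(
    { name: "AES-GCM", iv },
    key,
    new TextEncoder().encode(plaintext)
  );
  return { iv, ciphertext }; // both are needed to decrypt
}
```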


📚 What We Learned

1. Accessibility is more than compliance

We learned that simply making something “usable” is not enough. True accessibility means:

  • Reducing cognitive load
  • Providing guidance, not just access
  • Designing for real-world conditions, not ideal ones

2. Audio and haptics are powerful interfaces

Visual UI is not the only (or even the best) interface. Continuous feedback systems—like tone and vibration—can be more intuitive than traditional instructions.


3. Simplicity is hard

Breaking down complex forms into understandable steps required:

  • Language simplification
  • Context awareness
  • Thoughtful interaction design

4. Trust is critical

Users need to feel confident that:

  • Their data is secure
  • The system is accurate
  • They are in control

⚠️ Challenges We Faced

1. Document Scanning Without Vision

Helping a blind user correctly frame a document is non-trivial. Traditional visual overlays don’t work, so we had to rethink the experience using:

  • Spatial audio
  • Haptic feedback
  • Confidence-based cues
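For example, here is a minimal sketch of such non-visual guidance, using the Web Audio API for a confidence-driven tone and the Vibration API for capture confirmation; the frequency and pan mappings are illustrative:

```typescript
// Illustrative guidance loop: pitch rises as scan confidence rises, stereo
// pan points toward where the document sits, and a short vibration confirms
// capture. (navigator.vibrate is unsupported on some platforms.)

const ctx = new AudioContext();
const osc = ctx.createOscillator();
const panner = ctx.createStereoPanner();
osc.connect(panner).connect(ctx.destination);
osc.start();

/** offsetX: -1 (document too far left) .. 1 (too far right) */
function updateGuidance(confidence: number, offsetX: number) {
  // 200 Hz when the document is lost, up to 1000 Hz when perfectly framed.
  osc.frequency.value = 200 + 800 * confidence;
  // Pan the tone toward the side the user should move the phone.
  panner.pan.value = Math.max(-1, Math.min(1, offsetX));
}

function confirmCapture() {
  navigator.vibrate?.(200); // haptic "captured" pulse where supported
}
```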

2. Handling Complex and Ambiguous Forms

Forms are not standardized. We encountered:

  • Inconsistent layouts
  • Poor scan quality
  • Ambiguous field labels

3. Balancing Automation vs Control

Auto-capture and AI assistance are helpful, but too much automation can reduce trust.

We had to strike a balance:

Good UX = Automation + User Control


4. Privacy Concerns with AI

Using AI introduces risks when handling sensitive information (e.g., SSNs, medical data). We addressed this with:

  • Local processing
  • Data obfuscation
  • Secure pipelines

5. Mobile-First Design Constraints

Since scanning happens on mobile:

  • UI must be simple and uncluttered
  • Interactions must be fast and responsive
  • Feedback must be immediate and clear

🚀 Closing Thought

SAEL is not just a tool—it’s a step toward a world where everyone can interact with essential systems independently.

Because access isn’t enough—people deserve understanding, control, and dignity.
