One afternoon at the library, we encountered a young boy who was having difficulty reading a sign. When we spoke with him, we discovered that he was visually impaired and had forgotten his glasses at home. This encounter inspired us to develop Zardon Vision, an app that leverages the camera on mobile devices to aid the visually impaired.

What it does

Some features include:

Image description: labels what is happening in images

Text OCR: takes a photo and reads the text aloud

Built-in assistant: a custom Siri-accessible chatbot based on GPT-3

How we built it

Frontend: a combination of SwiftUI and UIKit.

Object Detection: Python, Flask, and PyTorch
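As a rough sketch of how that backend fits together: a Flask route receives an uploaded image and returns predicted labels as JSON. The route name and the `describe_image` stub below are illustrative assumptions; in the real service that function runs a PyTorch model over the decoded image.

```python
# Minimal sketch of the detection service (route name and stub are
# assumptions; the real handler runs a PyTorch model).
from flask import Flask, request, jsonify

app = Flask(__name__)

def describe_image(image_bytes):
    # Placeholder: the real service decodes the image and runs a
    # PyTorch detection model; this fixed label is a stand-in.
    return [{"label": "person", "score": 0.97}]

@app.route("/describe", methods=["POST"])
def describe():
    # Reject requests that don't include an image file.
    if "image" not in request.files:
        return jsonify({"error": "no image uploaded"}), 400
    labels = describe_image(request.files["image"].read())
    return jsonify({"labels": labels})
```

The app posts a camera frame to this endpoint and speaks the returned labels aloud.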

Text recognition: VisionKit and AVFoundation (proprietary Apple frameworks).

GPT Shortcut: Apple Shortcuts programming
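The Shortcut's job essentially boils down to building one HTTP request to the GPT-3 completions endpoint and reading the response back. Sketched in Python (the model name and token limit are illustrative assumptions, not necessarily what the Shortcut uses):

```python
# Sketch of the request the Shortcut sends. Endpoint and field names
# follow the OpenAI completions API; model and max_tokens are assumptions.
import json

def build_completion_request(question: str) -> dict:
    return {
        "url": "https://api.openai.com/v1/completions",
        "method": "POST",
        "headers": {
            "Authorization": "Bearer <YOUR_OPENAI_KEY>",  # stored inside the Shortcut
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": "text-davinci-003",
            "prompt": question,
            "max_tokens": 150,
        }),
    }
```

The Shortcut then parses the `choices[0].text` field of the JSON response and hands it to Siri to speak.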

Challenges we ran into

SwiftUI is still fairly new, so we had to fall back on UIKit for a lot of things, including the camera functionality

Our Flask server didn't connect consistently

The design process was difficult (designing for visually impaired users was challenging)

It was our first time using Shortcuts, and something as involved as sending an API request and reading the response back aloud took a while.

Accomplishments that we're proud of

First SwiftUI app

Coding in Shortcuts

First hackathon for most of our members

What we learned

UIKit > SwiftUI (In terms of reliability and features)

How to use the Transformers Hugging Face library

Shortcuts programming
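As a concrete example of the Transformers usage mentioned above, image description can be sketched with the library's image-to-text pipeline. The checkpoint name below is an assumption for illustration, not necessarily the one the app ships with:

```python
# Sketch of image captioning via the Hugging Face Transformers
# pipeline API; the checkpoint name is an illustrative assumption.
def caption_image(path, captioner=None):
    """Return a one-sentence caption for the image at `path`."""
    if captioner is None:
        # Imported lazily; downloads the checkpoint on first use.
        from transformers import pipeline
        captioner = pipeline("image-to-text",
                             model="nlpconnect/vit-gpt2-image-captioning")
    # The pipeline returns a list of {"generated_text": ...} dicts.
    return captioner(path)[0]["generated_text"]
```

The optional `captioner` argument lets the model be loaded once and reused across frames instead of reloading per call.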

What's next for Zardon Vision

Contacting a company to sponsor publishing on the App Store
