Inspiration
We were inspired by our work at the University of South Florida's Advanced Visualization Center (AVC) and Center for Assistive, Rehabilitation & Robotics Technologies (CARRT). There, we saw firsthand how powerful new technologies could be, and we were driven to use these tools to support underserved individuals in effectively expressing themselves.
What it does
Socialight is an innovative assistant designed to empower autistic individuals by helping them achieve communication without boundaries. Leveraging context gathered from an AR device, Socialight utilizes machine learning and Gemini AI to intelligently suggest relevant conversation starters and continuers. This technology aims to foster more engaging interactions, potentially changing who gets to 'shine' in social settings. Moving beyond the idea of the traditional 'socialite', Socialight focuses instead on cultivating genuine social light, opening up new opportunities for authentic connection and confident social participation for its users.
How we built it
We began by planning out our project architecture at a high level: sketching abstract concepts, collecting pictures, grouping thoughts, and expanding on the needs. Afterwards we split the tasks into subsections and merged similar or identical responsibilities. This left us with three main components: the UI, image/audio processing, and AI processing. With our roles and tasks clear, we began hacking. Things went mostly smoothly, but once we passed the 24-hour mark we faced a dilemma with the Snap Spectacles, and we ultimately decided to fall back to the Meta Quest 3 and continue the project from there, resulting in our final submission.
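The three-part split above can be sketched as a simple pipeline. This is an illustrative sketch only: the class and function names below are hypothetical, and the suggestion model is stubbed out with a canned reply rather than a real Gemini call.

```python
# Illustrative sketch of the three-component split: context capture
# (image/audio processing) feeds an AI-processing step that builds a
# prompt, and the result is text the UI layer can overlay in the headset.
# All names here are hypothetical, not our actual module names.

from dataclasses import dataclass


@dataclass
class SceneContext:
    """Context gathered from the headset about the wearer's surroundings."""
    detected_emotion: str      # e.g. output of a face-analysis model like deepface
    transcript_snippet: str    # recent speech captured by the device


def build_prompt(ctx: SceneContext) -> str:
    """AI-processing step: turn raw context into a suggestion prompt."""
    return (
        "Suggest a short, friendly conversation starter.\n"
        f"The other person seems {ctx.detected_emotion}.\n"
        f'They just said: "{ctx.transcript_snippet}"'
    )


def suggest(ctx: SceneContext, model=None) -> str:
    """UI-facing step: returns text for the headset overlay.

    `model` would wrap a real Gemini API call; a canned reply stands in
    here so the sketch runs without network access or credentials.
    """
    prompt = build_prompt(ctx)
    if model is None:
        return "You mentioned the workshop earlier; how did it go?"
    return model(prompt)


ctx = SceneContext(detected_emotion="happy",
                   transcript_snippet="I loved that workshop")
print(suggest(ctx))
```

In the real app, the `model` parameter would be replaced by a call into the Gemini-backed service; keeping it injectable is what let the rest of the pipeline stay unchanged when we pivoted platforms.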
Challenges we ran into
A major challenge we ran into was the limitations of the Snap Spectacles. Our idea was originally built and designed around the Snap Spectacles AR glasses. We planned to take full advantage of what some might call a flaw: virtual overlays in Spectacles are semi-transparent, letting the wearer see through the virtual images. However, early on we found a limitation of Spectacles: listening. By default, Spectacles has a built-in, untoggleable feature called Bystander Speech Rejection, which ignores speech not coming from the Spectacles wearer. Being able to listen to others and respond was a crucial part of our app, so rather than giving up we pushed forward. What we needed was a continuous stream of audio data without holding a virtual button to record, so we decided to work with the Bystander Speech Rejection data and attempt to convert it into usable information in batches for Unity. Nearing the 24-hour mark, we saw our workaround failing due to the limited quality of external voices, and we decided to pivot to the Meta Quest 3 to develop the app in our remaining time.
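The batching idea we attempted can be sketched as follows. This is a hedged sketch of the general technique, not our actual implementation: the class name, batch size, and chunk representation are all illustrative, and the real version would hand each batch to the Unity side.

```python
# Illustrative sketch of the audio-batching workaround: accumulate short
# audio chunks as they arrive and release them in fixed-size batches,
# instead of requiring a button-held recording. Names and the batch size
# are hypothetical.

from collections import deque


class AudioBatcher:
    """Buffers incoming audio chunks and emits them in fixed-size batches."""

    def __init__(self, batch_size: int = 8):
        self.batch_size = batch_size
        self._chunks: deque = deque()

    def push(self, chunk: bytes):
        """Add one chunk; return a full batch once enough have accumulated,
        otherwise return None."""
        self._chunks.append(chunk)
        if len(self._chunks) >= self.batch_size:
            return [self._chunks.popleft() for _ in range(self.batch_size)]
        return None
```

Batching like this trades latency for fewer, larger hand-offs to the engine; in our case the bottleneck turned out to be the audio quality of non-wearer voices rather than the hand-off itself.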
Accomplishments that we're proud of
- We are proud of our code architecture and planning prior to implementation: we designed a modular approach that let us pivot platforms while keeping the same app concept.
- Implementing a functional, competition-ready app despite starting on it 24 hours late.
What we learned
- Effective program architecture: the work that happens outside of coding, and how adaptable good planning can make a project
- We learned how to use Snap Lens Studio to develop apps for Snapchat and Spectacles
What's next for Socialight
Completing the app by:
- Improving the design
- Refining the question-suggestion models
- Gathering more contextual information to improve accuracy
- Possibly presenting to CARRT or other education research groups to continue Socialight in a larger environment
Built With
- deepface
- express.js
- flask
- gemini
- metaxr
- node.js
- unity