Inspiration
The prospect of aging is often dreaded because of the high likelihood of cognitive decline. Unfortunately, by the time symptoms become noticeable, significant cognitive deterioration may already have occurred, often limiting the effectiveness of interventions. If detected early, however, cognitive impairment is not only manageable but in some cases even reversible. Recognizing the importance of early detection, we set out to develop Cogniverse, a tool that uses cutting-edge voice analysis to monitor cognitive health over time. By analyzing voice biomarkers called Mel-Frequency Cepstral Coefficients (MFCCs), Cogniverse tracks subtle changes in speech patterns that could signal the onset of cognitive decline.
Through daily, non-invasive monitoring, Cogniverse allows healthcare providers, caregivers, and users themselves to detect early signs of cognitive impairment, ensuring that individuals receive the necessary interventions before the condition progresses too far. Our mission is to empower both individuals and their healthcare teams with real-time insights, providing a tool for proactive health management that can help preserve quality of life during the aging process. By leveraging AI and machine learning to analyze speech data, we believe that Cogniverse can change the future of aging care, making early diagnosis and intervention more accessible and effective than ever before.
What it does
- By having users talk to it for just two minutes a day, Cogniverse provides insight into the extent of cognitive decline.
- Cogniverse listens to voice recordings and analyzes them for symptoms of cognitive decline. It examines a biomarker called Mel-Frequency Cepstral Coefficients (MFCCs), whose changes over time can indicate cognitive decline. Based on the severity of those changes, it estimates how far the decline has progressed, offering a method of early detection by showing the risk of developing a degenerative cognitive disease.
- Cogniverse looks for subtle patterns of speech deterioration over time, such as slurring. It can give a sense of the progression of cognitive decline in patients with Alzheimer's, Parkinson's, etc., or warn of the onset of symptoms.
How we built it
Back-end:
- We processed the PDFs in the Media Set by extracting their text elements, converting the raw files into a consistent format that is easier to work with.
- We created chunks by splitting the extracted text into strings of similar length, producing small text blocks without losing too much important information. Filtering these chunks lets us find the information in our data related to a specific topic.
- We created the chunk IDs to reference the chunk objects in the Ontology we created.
- We used an LLM to summarize the 17 research articles we input into the Palantir Platform.
- There are three graphs: the first shows the original data, the second the first derivative of the original data, and the third the second derivative of the original data.
- Taking derivatives of the data multiple times helps clarify the trends in it.
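The chunking step above can be sketched as follows. This is an illustrative sketch, not Palantir's actual pipeline: the chunk size, overlap, and `chunk_id` format are all assumptions we chose for the example.

```python
def chunk_text(text: str, doc_id: str, size: int = 800, overlap: int = 100) -> list[dict]:
    """Split text into similar-length chunks with a small overlap, giving each
    chunk an ID that could reference it from an Ontology object.
    (Sizes and ID scheme are illustrative, not Palantir's actual conventions.)"""
    chunks = []
    start, idx = 0, 0
    while start < len(text):
        chunks.append({"chunk_id": f"{doc_id}-{idx}", "text": text[start:start + size]})
        idx += 1
        start += size - overlap  # overlap keeps sentences from being cut off entirely
    return chunks
```

Similar-length chunks with a little overlap keep each block small enough to filter by topic while limiting how much context is lost at chunk boundaries.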
How we Analyzed Mel-Frequency Cepstral Coefficients:
- Used Jupyter through Palantir
- We were not able to successfully create a model adapter to move the code from Palantir Jupyter into our Palantir Pipeline (which would have let us synthesize data and completely train the model)
- We utilized MFCCs because they capture spectral features of voice (timbre, phonetics), and changes in MFCCs can indicate cognitive issues
- We created graphs of the MFCCs and also converted the graphs' data into numerical form
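Conceptually, MFCC extraction works like this. In practice we ran it in a Jupyter notebook (libraries such as librosa provide `mfcc` and `delta` functions out of the box); the minimal NumPy sketch below shows the pipeline — framing, power spectrum, mel filterbank, log, DCT — plus the first and second derivatives behind our three graphs. All parameter values here are common defaults, not our exact notebook settings.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr, n_fft=512, hop=256, n_mels=26, n_mfcc=13):
    # 1. Slice the signal into overlapping frames and apply a Hann window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop:i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    # 2. Power spectrum of each frame.
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # 3. Triangular mel filterbank (captures the timbre/phonetic structure).
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, ctr, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, ctr):
            fbank[m - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):
            fbank[m - 1, k] = (hi - k) / max(hi - ctr, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)
    # 4. DCT-II decorrelates the log energies into cepstral coefficients.
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_mfcc), 2 * n + 1) / (2 * n_mels))
    return log_energy @ basis.T            # shape: (n_frames, n_mfcc)

def deltas(coeffs):
    """The 'three graphs': MFCCs plus their first and second derivatives."""
    first = np.gradient(coeffs, axis=0)    # rate of change across frames
    second = np.gradient(first, axis=0)    # acceleration of that change
    return first, second
```

Each row of the output describes the spectral shape of one short frame of speech, which is why drift in these coefficients over months of recordings can surface changes in articulation.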
Our Simulation:
- Analyzed audio from Joe Biden's speeches from 2018, 2020, 2022, and 2024 to look for signs of cognitive decline
- Collected videos from the internet, downloaded them, cut them to ~2 minutes, and fed the audio into the Jupyter notebook
- Cross-referenced with a control MFCC graph of an individual with dementia (shown in our video)
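One simple way to compare recordings across years — sketched here as an illustration, not our exact notebook code — is to summarize each recording by its mean MFCC vector and measure how far later recordings drift from a baseline year:

```python
import numpy as np

def mfcc_drift(baseline: np.ndarray, later: np.ndarray) -> float:
    """Euclidean distance between the mean MFCC vectors of two recordings.
    Each input has shape (n_frames, n_mfcc); a larger drift suggests the
    voice's spectral profile has shifted relative to the baseline."""
    return float(np.linalg.norm(later.mean(axis=0) - baseline.mean(axis=0)))
```

Tracking this drift year over year (2018 through 2024 in our simulation) yields a single trend line, and a control graph from an individual with dementia gives a reference point for how large a drift is concerning.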
Front-end:
- Built the iOS app frontend using SwiftUI, starting with a clean and user-friendly interface design.
- Main screen includes three core buttons:
- Microphone button (center): Starts and stops the two-minute voice recording.
- "Current Analysis": Displays the latest cognitive risk assessment.
- "Trends": Intended to show score progression over time.
- Integrated AVAudioRecorder (or a placeholder) to manage audio input.
- Upon stopping the recording, the app simulates MFCC feature extraction by generating and converting graphical data into numerical format.
- Audio is sent to the Palantir-hosted backend API, which analyzes the MFCCs and returns a cognitive risk level.
- Results are dynamically displayed using @State variables in SwiftUI, completing the loop from voice capture to visual cognitive health feedback.
Challenges we ran into
- Our next step was to create a model adapter to add this code to our Pipeline, synthesizing the inputs (information from research and audio examples) needed to finish training our model, but we ran into major issues adapting the data
- We followed the steps located on the Palantir Documentation Page
- We asked for help from Palantir engineers, but they didn't know how to create the model adapter either
- We worked on the data-adaptation errors for more than three hours but weren't able to figure them out
Accomplishments that we're proud of
- We are glad that we could generate graphs from our audio inputs
- Our frontend and backend each work on their own, even though we did not have time to connect them
- We are new to coding, so we were happy we were able to create error-free code
What we learned
- We learned how to learn and navigate Palantir
- We learned how to work under a time crunch
- We learned how to use SwiftUI
What's next for Cogniverse
After presenting to the judges at Las Altos IX, we aim to transform this into a real-world solution for continuous cognitive health monitoring. Because we ran out of time, we did not get the chance to integrate our front end and back end, something we plan to do. With further development, Cogniverse could be integrated into healthcare platforms like the Kaiser Permanente app or MyChart, allowing physicians to passively monitor their patients' cognitive trends and intervene early when symptoms appear. On the B2B2C front, the app could partner with insurance providers for risk assessment, be integrated into senior living facilities and home health agencies to monitor aging populations, and be adapted into workplace wellness programs for early detection of cognitive strain or burnout.