Inspiration

Having seen the success of agentic AI in other fields such as software development, we were inspired to find ways to put artificial intelligence to work for everyday people. As young people just entering the financial world ourselves, we realized that AI could help young or inexperienced people make better financial decisions.

What it does

pecunia offers a fully fledged, multilingual, personalized financial assistant for everyone. Harnessing Whisper, Llama 3, DeepL, ElevenLabs and BERT, we utilize the power of artificial intelligence to guide users in their pursuit of financial wellbeing. pecunia currently offers one agent, OsakAI, inspired by Osaka from Azumanga Daioh. As a multilingual speech-to-speech agent, OsakAI is able to provide an intimate connection to users of all different backgrounds.

How we built it

We built pecunia using Node.js, Express and Python. All of the machine learning work happens in Python, using the HuggingFace, Whisper, Meta-Ai-API and DeepL libraries. We use Whisper to detect the language spoken by the user and to transcribe speech to text. If the detected language is not English, the text is translated to English via DeepL before being sent to Llama 3 70B, the newest SOTA model from Meta. We utilize a custom prompt and a personalized dataset to get the best possible performance. Then, we translate the output from Llama 3 back into the original language before sending the text to ElevenLabs to be converted to spoken word.
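The translation round trip described above can be sketched as a small orchestration function. Every stage body below is a hypothetical stand-in (the real versions call Whisper, DeepL, Llama 3 and ElevenLabs); only the control flow mirrors our pipeline.

```python
# Sketch of pecunia's speech-to-speech flow. Stage names and bodies are
# illustrative stubs, not the real API calls.

def transcribe(audio: bytes) -> tuple[str, str]:
    # Whisper stand-in: speech -> (transcript, detected ISO language code)
    return ("¿Cómo ahorro dinero?", "es")

def translate(text: str, target: str) -> str:
    # DeepL stand-in: tag the text so the round trip is visible in the output
    return f"[{target}] {text}"

def ask_llm(prompt: str) -> str:
    # Llama 3 stand-in: the real call applies our custom financial prompt
    return f"Answer to: {prompt}"

def speak(text: str, language: str) -> bytes:
    # ElevenLabs stand-in: text -> spoken audio in the user's language
    return text.encode("utf-8")

def respond(audio: bytes) -> bytes:
    text, lang = transcribe(audio)
    if lang != "en":
        text = translate(text, "en")    # route non-English input through DeepL
    reply = ask_llm(text)
    if lang != "en":
        reply = translate(reply, lang)  # translate the answer back
    return speak(reply, lang)
```

Swapping a stage (say, a different translation provider) only means replacing one function, which is what makes the pipeline interchangeable.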

The UI of pecunia was built in Node.js, with nostalgia as an explicit design goal. We use small design elements such as shadowed text to bring users back to the 2000s, around when Azumanga Daioh was released. The UI is also kept extremely minimalistic, with the focus on direct spoken communication between the user and Llama 3.

Challenges we ran into

We ran into a ton of challenges throughout the process of building pecunia. In the beginning, most were related to our model pipeline: optimizing our prompting strategy and our translation pipeline was key to pecunia's success, though some latency proved unavoidable. Later, we faced numerous challenges in polishing the user experience. Chiefly, we aimed to make it as light-hearted as possible, bringing some joy to what is generally an overly serious topic.

Accomplishments that we're proud of

We're really proud of our interchangeable pipeline, which lets us easily swap models in and out, especially given our focus on open-source models such as Llama 3. We're also proud of our website design, especially given that we had basically no experience in web design and hand-coded much of the site in CSS.

What we learned

We learned a lot about front-end and back-end design in general, and about the specifics of full-stack development: how the different parts of the stack interact with each other. We also learned a lot about being a team; we had a unified vision from the beginning and tried to play to each other's strengths, whether those were in Python or in learning JavaScript on the fly.

What's next for pecunia

Our current plans center on optimizing our pipeline. We currently offload a lot of computation to API calls; we are considering letting users opt into local models, which would reduce latency at a slight cost in output quality. We would also like to host our own models: with more computational power at our disposal, we could serve the open-source Llama 3 models we've chosen for our agent ourselves. This would help us keep user information even more private than we can now.
