Like most adults today, I’ve spent countless hours researching online. The thing about researching online is, you get dragged through countless sales pages, scam sites, and irrelevant information. Perplexity gives us the power to skip those hellish 8 hours, and to save ourselves from scams, in a few seconds.
Honestly, I didn’t use Perplexity much before this. Once or twice. I normally use Grok and Gemini, with a few others like Leo here and there. At first, I was kind of confused by the non-Pro rate limits, but I started asking Perplexity tons of questions and got excellent results.
See, at first my idea was to validate transaction types from crypto contract addresses for use in crypto tax accounting. However, after consulting with Perplexity, we decided that simply wasn’t going to happen in a month, and it wouldn’t showcase the Sonar API calls in a meaningful way. So we got into research.
Like most people who have explored Web3 and crypto, I have indeed been scammed. Multiple times. It sucks, but that’s life on the frontier. Or it was, until now!
Using the sonar-pro model, I was able to pull excellent results from multiple search areas, results that generally take me hours to find. Even navigating some platforms’ developer pages to find information like Team and Whitepaper can take a fair amount of time and effort. Pro picks them up in less than 15 seconds.
^^ I learned this by running template prompts in new threads on Perplexity
IQ Score
So after Perplexity and I got done spamming our initial 8 prompts, I built them all into subsystems to test the different Sonar models and reproduce the results. I found plain sonar-pro with a temperature of 0 worked best. Sonar didn’t pick up information like Whitepaper and Team consistently, and forced a lot of retries. The reasoning models were too much, as the point of these calls isn’t to reason out the information, just to get it and report it. Making them all reasoning-pro just didn’t seem to add more value over sonar-pro here. I did make the value parameter a reasoning call, as it assesses the asset as a whole.
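To give a concrete idea of what one of these subsystem calls looks like, here is a minimal sketch, not my exact code: the function name and prompt plumbing are illustrative, while the endpoint, model name, and temperature match what I describe above and the Perplexity docs.

```js
// Minimal sketch of a sonar-pro audit call (illustrative names, not my app's code).
const PPLX_URL = "https://api.perplexity.ai/chat/completions";

async function runAuditPrompt(apiKey, assetName, promptTemplate) {
  const res = await fetch(PPLX_URL, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "sonar-pro",      // plain sonar missed Whitepaper/Team too often
      temperature: 0,          // keeps retries consistent
      messages: [
        { role: "system", content: "Report findings only. Do not speculate." },
        { role: "user", content: promptTemplate.replace("[asset]", assetName) },
      ],
    }),
  });
  if (!res.ok) throw new Error(`Sonar call failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}
```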
At first I was using text-based prompting, as that’s all I’ve used up to now. Perplexity gave me a solid nested layout like this:
[asset]
[audit1-6]
-[firm]
-[found]
-[details]
[summary]
[sources]
That worked, very well actually. It didn’t fail after 200 tests. But in the office hours, James mentioned it would be better to use structured JSON, so I tried that out.
I gave a text-based prompt to Perplexity and asked for a .json version; it took about an hour to sort them all.
At this point I also moved them into .json files to clean up the .js code, and added more params.
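For anyone curious what the structured version looks like, here is a rough sketch. The schema fields just mirror the text layout above; I’m assuming the response_format / json_schema shape from Perplexity’s structured outputs docs, and my real field names and params differ a bit.

```js
// Sketch of a structured-output audit call (field names are illustrative).
const auditSchema = {
  type: "object",
  properties: {
    asset: { type: "string" },
    audits: {
      type: "array",
      items: {
        type: "object",
        properties: {
          firm: { type: "string" },
          found: { type: "boolean" },
          details: { type: "string" },
        },
        required: ["firm", "found", "details"],
      },
    },
    summary: { type: "string" },
    sources: { type: "array", items: { type: "string" } },
  },
  required: ["asset", "audits", "summary", "sources"],
};

const body = {
  model: "sonar-pro",
  temperature: 0,
  messages: [{ role: "user", content: prompt }],
  response_format: {
    type: "json_schema",
    json_schema: { schema: auditSchema },
  },
};
```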
So I thought this was a cool little tool, but that’s all it was. A tool, which is cool, but I don’t know, it felt lacking for an app, you know? Like, how do I know what to research? How can I pair assets to unknown networks?
News
Thus the news. Where the IQ system took about 20 hours to sort out, this one took less than 10. It was extremely simple. I tested normal sonar: it didn’t pull many items, and upon retest it often pulled the same ones. Normal pro pulled around 5 or 6, with the same retest duplication.
I had a few options: implement a link-based blacklist in the prompt, which would be a little complicated; delete or remove duplicate articles after the fact; or pull as much as possible each time to cover a whole news period. I decided to force long responses on pro, and it gave around 9 articles, depending on the exact news for the day. The freshness indicators tell us when the latest run for that interest tag was.
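In case it helps, here is roughly what the duplicate-removal option plus the freshness stamp could look like; this is a sketch with made-up names, not my production code.

```js
// Track already-seen article links per interest tag, drop repeats,
// and stamp each run for the freshness indicator.
const seenLinks = new Map(); // interestTag -> Set of URLs

function ingestNewsRun(interestTag, articles) {
  const seen = seenLinks.get(interestTag) ?? new Set();
  const fresh = articles.filter((a) => !seen.has(a.url));
  fresh.forEach((a) => seen.add(a.url));
  seenLinks.set(interestTag, seen);
  return { interestTag, articles: fresh, lastRun: new Date().toISOString() };
}
```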
There are things I don’t like about the news, like when it pulls general recaps or “best to buy this month” pieces, but generally it is okay.
Chain Sage
Chatbots are all the rage right now. I just wanted something to quickly look up network pairs while I was working. Eventually I decided to give it shadow commands, or secret prompts that run semi-detailed searches. I like my apps to be a little dramatic, and who doesn’t love in-app secrets? It’s mostly just a chatbot. It took a few hours to implement; the shadow prompts took about 10.
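Here is the general shape of the shadow-command idea, with made-up trigger words and prompt text; the real ones stay secret, obviously.

```js
// If the user's message starts with a hidden trigger, swap in a more
// detailed search prompt before the Sonar call (illustrative triggers).
const shadowPrompts = {
  "/deepdive": "Run a semi-detailed search on the pair, including recent liquidity and team activity.",
  "/lore": "Summarize the project's origin story and any notable controversies.",
};

function resolvePrompt(userMessage) {
  const [maybeCommand, ...rest] = userMessage.trim().split(" ");
  const shadow = shadowPrompts[maybeCommand];
  return shadow
    ? `${shadow}\n\nUser query: ${rest.join(" ")}` // hidden expanded prompt
    : userMessage;                                 // plain chatbot path
}
```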
Social
So because I didn’t think one tool was cool enough, I made two more tools. But now I just have some tools, and not even a bag or shed to put them in. And research is just research. At the time I was saving the Sonar outputs as a text file and .json, so I could have just let people download or copy those (I added copy anyway).
But that seemed boring: go to a site, pay for some research, leave. Eh. Boring.
So why not publish those results and allow other users to view and engage with them? That was a mistake.
I dove straight into turning these loose tools into a social media app. I messed it up so badly. I spent over 100 hours on the feed and data management, easily eclipsing the time I spent on the Sonar systems. It was kind of fun, but even with those hours, I still only got baseline social functionality. You can like things, bookmark them, and leave simple comments. That is all.
But that’s kind of okay for a social research platform. We don’t want to bog down the feed with feelings and bios, just cold hard facts. I should add a notification system and maybe profiles.
Reflection
Overall, this was a great experience. I learned a lot about Perplexity and the Sonar API. The cost of the API was not as bad as I expected. With the text-based prompts it was around $0.014 per pro call, but with the .json-based prompting it dropped to about $0.008, though I saw mention of cost improvements during the hackathon, so I might have benefited from that.
I’ll likely still aim to release a public demo version around judgment day for anyone to play with, but I did release a private demo for judging, which uses mock auth, so make sure you check it out!
Disclosure
I use a TON of AI, mostly Grok and Gemini, to write apps. I also could use a job. So if you know any startups looking for a guy, I’m only a relocation bonus away. Thanks for reading.