Inspiration
We’re all too familiar with the “late‐night CFP scramble.” Picture this: a cup of lukewarm coffee in one hand, a half‐written paper open in another tab, and 17 different conference websites screaming “Submit now!” in your notifications. That was me, riffling through CFP listings late into the night during my own paper submissions. I remember matching my abstract on adaptive signal processing to a niche workshop in Spain, frantically cross‐checking scope statements, and juggling a spreadsheet of deadlines. It felt like a part‐time job just to find the right venue. We realized how absurd it was that in 2024 we still had to manually sift through pages of CFP listings, cross‐check scopes, and pray we’d catch the submission window on time.
“Surely someone’s automated this,” we thought. But after a few hours of digging, we found only stale CSV files or half‐broken aggregator sites. That’s when the lightbulb flickered: “What if an AI could do the heavy lifting?” We wanted something that not only matched keywords but actually understood our abstract, saw the scope overlap, and whispered, “Hey, buddy—this conference is perfect for you.” That little spark led to the birth of AI CFPs.
What it does
AI CFPs is your research‐submission sidekick. Here’s the rundown:
- Smart Matching: You feed in your paper’s title, abstract, and keywords. Our AI dives in, reads between the lines, and spits out a ranked list of CFPs that fit like a glove—no more guesswork.
- Deadline Tracking: We’re not just tossing URLs at you. You get real‐time alerts (email or in‐app) whenever a deadline is closing in. It’s like having a helpful friend shouting, “Dude, your CFP deadline is tomorrow!”
- Save Favorites: Spot a promising call but haven’t finalized coauthor sign‐off yet? Hit “Favorite,” and it lives in your personal watchlist.
- Export Results: Want to share your matches with a lab mate or advisor? Download the curated list as a clean PDF or CSV for offline review.
- Tailored Recommendations: Need to tweak your introduction or adjust formatting? Our AI suggests adjustments to align your paper with each CFP’s specific requirements.
- Global Coverage: We’ve aggregated CFPs from major disciplines worldwide—engineering, humanities, medicine, you name it—so you’re not limited to just one region or publisher.
How we built it
No‐Code Magic with Bolt:
- Instead of wrestling with databases and deployment scripts, we used Bolt’s visual workflows. Drag, drop, link—boom, you have a live matching engine.
- Bolt’s built‐in connectors let us pull CFP metadata from public endpoints without writing a single line of backend code.
Data Aggregation:
- We configured the system to run on‐demand scrapes (via pre‐built integrations) from conference websites and open APIs. When a new CFP appears, it automatically ingests and normalizes it.
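To make the ingest-and-normalize step concrete, here is a sketch in plain Python of what that step conceptually does (in our case Bolt's visual workflow handles it, and the field names below are purely illustrative):

```python
# Sketch (not our actual Bolt workflow): different sites expose the same
# CFP fields under different names, so normalization maps them onto one
# canonical schema. All field names here are illustrative.

CANONICAL_KEYS = {
    "conf_name": "name", "title": "name",
    "due": "deadline", "submission_deadline": "deadline",
    "url": "link", "website": "link",
}

def normalize_cfp(raw: dict) -> dict:
    """Map the varying field names scraped from different sites
    onto one canonical schema, dropping unknown fields."""
    out = {}
    for key, value in raw.items():
        canonical = CANONICAL_KEYS.get(key.lower())
        if canonical and canonical not in out:
            out[canonical] = value.strip() if isinstance(value, str) else value
    return out

print(normalize_cfp({"Title": "ICASSP 2026", "due": " 2025-10-06 ", "Website": "https://example.org"}))
```

The real pipeline also has to cope with missing years and timezones, which we cover under Challenges below.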
AI Matching:
- We integrated a pre‐trained language model that takes your title, abstract, and keywords, then returns a similarity score against each CFP description.
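The ranking step can be illustrated with a toy stand-in for the pre-trained model: a bag-of-words cosine similarity between an abstract and each CFP description. The real matcher uses learned embeddings, so treat this only as a sketch of how scores turn into a ranked list:

```python
# Toy stand-in for the pre-trained language model: score each CFP
# description against the user's abstract by bag-of-words cosine
# similarity, then return the CFPs sorted best-first.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_cfps(abstract: str, cfps: dict[str, str]) -> list[tuple[str, float]]:
    """Return (cfp_name, score) pairs, best match first."""
    query = vectorize(abstract)
    scored = [(name, cosine(query, vectorize(desc))) for name, desc in cfps.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

cfps = {
    "SignalProc Workshop": "adaptive signal processing and filtering methods",
    "NLP Conference": "natural language processing and text generation",
}
print(rank_cfps("adaptive signal processing for audio", cfps))
```

Swapping `vectorize` for an embedding model changes the scores but not the shape of the pipeline.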
UX Flow in Figma & Bolt:
- UI prototypes started in Figma for quick feedback. Then we rebuilt the interface inside Bolt’s front‐end widgets—simple forms, instant preview of matches, and one‐click “Favorite” buttons.
Notifications & Exports:
- Bolt’s built‐in scheduling and email modules send deadline alerts. When you hit “Export,” Bolt generates a PDF or CSV on the fly and emails it to you.
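For illustration, the CSV side of that export amounts to something like the following (Bolt generates the file for us; this is just a conceptual sketch with an assumed column layout):

```python
# Illustrative only: build the CSV export in memory, ready to be
# attached to an email or offered as a download. The column names
# are an assumed schema, not Bolt's actual output.
import csv
import io

def matches_to_csv(matches: list[dict]) -> str:
    """Serialize matched CFPs to a CSV string."""
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["name", "deadline", "score", "link"])
    writer.writeheader()
    for row in matches:
        writer.writerow(row)
    return buffer.getvalue()

print(matches_to_csv([
    {"name": "ICML", "deadline": "2025-01-30", "score": 0.91, "link": "https://example.org"},
]))
```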
Challenges we ran into
Data Consistency:
Every conference site formats CFP details differently. Some list deadlines as “15 Sep” with no year or timezone. We had to build normalization steps in Bolt to infer the missing pieces (e.g., assume the upcoming year when none is provided).
AI Tuning in a No‐Code Environment:
Bolt’s AI connector gave us out‐of‐the‐box language models, but fine‐tuning meant uploading custom datasets. Figuring out the right prompts and parameters through a GUI was a juggling act; for weeks our “semantic match” felt more like semantic miss.
Scaling Scrapes Without Code:
Some conference websites block automated requests. We leaned on Bolt’s proxy settings, but occasionally hit CAPTCHAs. That meant manually intervening, adjusting settings, and resuming the flow, which is painful when you want true “set it and forget it.”
Balancing Speed vs. Accuracy:
Researchers hate waiting. We tuned our pipeline so initial matches appear in under 60 seconds, even before every semantic score has been fully refined. A more thorough pass then runs in the background to update the scores.
Deadline Timezone Confusion:
A CFP deadline of “23:59 JST on June 1, 2025” suddenly reads as “8:29 PM IST.” Bolt’s date parsers helped, but we still had to add logic to convert everything to UTC internally and display it in the user’s local timezone.
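The rule we settled on can be sketched in a few lines of Python: parse the deadline in the CFP’s stated timezone, store it as UTC, and only convert to the user’s zone at display time (the timezone names below are the JST/IST example from above):

```python
# Sketch of our deadline-handling rule: attach the CFP's stated
# timezone, convert to UTC for storage, render in the user's zone.
from datetime import datetime
from zoneinfo import ZoneInfo

def to_utc(naive: str, tz_name: str) -> datetime:
    """Parse a naive 'YYYY-MM-DD HH:MM' string in the CFP's timezone,
    then convert to UTC for internal storage."""
    local = datetime.strptime(naive, "%Y-%m-%d %H:%M").replace(tzinfo=ZoneInfo(tz_name))
    return local.astimezone(ZoneInfo("UTC"))

def render_local(utc_deadline: datetime, user_tz: str) -> str:
    """Convert the stored UTC deadline into the user's local timezone."""
    return utc_deadline.astimezone(ZoneInfo(user_tz)).strftime("%Y-%m-%d %H:%M %Z")

deadline_utc = to_utc("2025-06-01 23:59", "Asia/Tokyo")  # 23:59 JST
print(deadline_utc)                                       # stored UTC value
print(render_local(deadline_utc, "Asia/Kolkata"))         # what an IST user sees
```

Standardizing on UTC internally means every comparison and alert trigger works off one clock, and conversion happens only at the edges.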
Accomplishments that we're proud of
- Built Entirely in No‐Code Using Bolt: From UI to AI matching to notifications and exports—all visual workflows, zero custom backend code. We actually demoed a working prototype within two weeks of signing up for Bolt.
- 50 Interested Researchers on Day One: We shared a simple landing page and collected emails. By the end of Day 1, fifty paper writers and researchers signed up, eager to try an AI‐powered CFP matcher.
What we learned
- No‐Code Isn’t a Magic Wand: Bolt accelerated development, but data normalization and AI tuning still required careful attention. We spent more time planning flows and testing edge cases than writing code ever would have taken.
- Early Feedback Matters: Those first 50 users weren’t just numbers—they pointed out missing conference categories, confusing deadline displays, and small UI hiccups. If we hadn’t gotten them onboard Day 1, we’d still be guessing what mattered most.
- Deadline Handling Is Tricky: Even in a no‐code tool, date parsing and timezone conversions can kill you. Standardizing on UTC internally and then rendering local times to users was non‐negotiable.
- Focus on Core Value: In a world where most platforms promise “AI magic,” our early users kept asking, “Does it actually find relevant CFPs?” That pushed us to focus on matching accuracy first, then bells and whistles later.
What’s next for AI CFPs
- Collaborative Workspaces in Bolt: We’re building shared boards where coauthors can comment on each other’s “Favorite” CFPs, assign tasks, and see who’s handling which part of the submission.
- Mobile Push Alerts: Bolt’s mobile SDK is on our roadmap, so you’ll get a “Deadline in 24 hours!” push notification right on your phone.
- Deeper AI Customization: We want each user to fine‐tune matching—like saying, “Only show me IEEE/ACM CFPs,” or “Prioritize journals over conferences.” That’ll involve more custom model training in Bolt.
- Multi‐Language Interface: We’re adding Spanish and Mandarin support next quarter, so non‐English speakers can navigate in their own language.
- Publisher API Integrations: We’re in talks with a few major publishers to pull CFP data directly—no scrapers, no hassle.
Stay tuned—AI CFPs is just getting started. We built this entire thing in Bolt, but our mission remains the same: let researchers spend time on ideas, not deadlines.
Built With
- bolt
- netlify
- openai
- react
- shadcnui
- supabase