What inspired us
The idea came from real-life needs we face in everyday development work. We wanted to build a tool that supports the software testing process — fast, structured, and with as little manual effort as possible. This became the foundation of the project.
What we learned
Writing better prompts
We learned how to write efficient and focused prompts. In the beginning, we used long and vague ones, which often led to unclear or broken results. Later, we started writing more structured and precise instructions. We realized that the better the prompt, the better the output.
Communicating clearly with the AI
It became clear that AI is not a mind reader. Every unclear instruction could ruin the output. Over time, we got better at explaining tasks logically and precisely — like writing a strict specification. This helped us save time and reduced frustration.
Problem breakdown is key
For more complex tasks, we noticed that giving everything in one go doesn’t work. Breaking down problems into smaller steps and prompting them one by one gave much better results. This step-by-step method helped us keep control over what was being built.
Small updates are sometimes harder than new features
Surprisingly, modifying existing code was often more challenging than generating something new. Integrating changes into an existing structure without breaking other parts required very careful prompting — or even manual editing.
How we built the project
- We started by defining the app we wanted to create and what features it needed.
- We broke down the goals into smaller modules to keep development manageable.
- At first, we used broader prompts to generate core features.
- Later, we moved to more specific, targeted prompts to improve precision and efficiency.
Challenges we faced
- It was often hard to make the AI understand what we wanted, even for simple tasks like moving a button.
- The system worked best when we told it exactly which file or function to edit.
- Many times we had to fix small issues manually because prompting was too slow or expensive.
- Token usage became a serious limitation. Even small requests could cost 500,000+ tokens.
- In many cases, the AI made changes that didn’t affect the functionality at all — just comments or formatting.
- The generated code was sometimes too large to review. Multi-thousand-line diffs are not practical.
- The system often created very long files instead of smaller, modular ones.
- There was no good way to track how tokens were used, which made optimization harder.
- Version control was weak — especially in team scenarios. Branch handling was unreliable.
- Working as a team was not smooth. The assistant would look for branches in the wrong repository.
- Even managing a project of this relatively modest size turned out to be harder than expected.
- Overall, the AI worked better with well-scoped, modular tasks.
- For proof-of-concept development, it was extremely useful and fast.
Technologies we used
- Languages: JavaScript
- Frontend: React with Tailwind CSS (flat UI style)
- Backend: Firebase + Firestore
- Cloud: Firebase Cloud Functions
- Development tools: GitHub
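To give a feel for how the pieces fit together: the frontend collects test results, and the backend persists them in Firestore. The sketch below is a hypothetical example, not code from the project — the function name `buildTestRunDoc`, the field names, and the idea of a "testRuns" collection are all our illustrative assumptions about what such a document might look like before a Cloud Function writes it.

```javascript
// Hypothetical sketch: shaping a test-run result before writing it to a
// Firestore collection such as "testRuns" (all names are illustrative).
function buildTestRunDoc(suite, results) {
  // Count passing and failing cases from an array like [{ ok: true }, ...].
  const passed = results.filter((r) => r.ok).length;
  const failed = results.length - passed;
  return {
    suite,                 // e.g. "login-flow"
    passed,
    failed,
    total: results.length,
    createdAt: Date.now(), // in Firestore, a server timestamp would be used
  };
}
```

A Cloud Function would then store this object with something like `firestore().collection("testRuns").add(doc)`, but that wiring depends on project-specific setup we have not shown here.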
Built With
- bolt
- css
- firebase
- firestore
- github
- javascript
- react
- tailwind