We wanted a way to help designers take mockups from early sketches to live code more efficiently. We believe this computer-vision approach can streamline the product development process and help teams move from idea to prototype much faster.
What it does
This technology maps hand-drawn symbols in wireframes to components in our pattern library, accelerating the jump from sketch to higher-fidelity mocks.
How I built it
Using a camera and an open-source machine learning library, we trained a model to recognize hand-drawn symbols, classify them as Homebase components, and instantly render prototype code in a browser.
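The classify-and-render loop can be sketched roughly as below. This is a minimal illustration, not our actual code: the trained model is stubbed out as a scoring function, and the label and component names (`HbButton`, etc.) are assumptions standing in for real Homebase components.

```javascript
// Assumed mapping from classifier labels to Homebase component names.
const LABEL_TO_COMPONENT = {
  button: 'HbButton',
  input: 'HbTextInput',
  image: 'HbImage',
};

// Stub standing in for the trained model: picks the label with the
// highest probability from a { label: probability } map.
function classify(scores) {
  return Object.entries(scores).sort((a, b) => b[1] - a[1])[0][0];
}

// Turn one recognized sketch symbol into a prototype JSX string.
function renderComponent(scores) {
  const label = classify(scores);
  const component = LABEL_TO_COMPONENT[label];
  return component ? `<${component} />` : null;
}

console.log(renderComponent({ button: 0.91, input: 0.06, image: 0.03 }));
// → "<HbButton />"
```

In the real pipeline the scores would come from the model's prediction on a camera frame, and the resulting JSX would be mounted in the live browser preview.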
Challenges I ran into
The open-source machine learning software we used was an early prototype with very little documentation, which made it difficult to set up.
Accomplishments that I'm proud of
Getting the model to correctly classify Homebase components from a limited set of training data.
What I learned
Having servers talk to each other is much more challenging than anticipated.
What's next for Real time rendered sketches to live components
Building a robust backend that sends classification strings to be rendered as React Homebase components. We also want to collaborate with designers to integrate this into our current development workflow.
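One way the planned backend-to-client handoff could work is a simple JSON message carrying the classification string, which the React side resolves to a Homebase component. Everything here is hypothetical: the message fields and component names are placeholders, not a finished protocol.

```javascript
// Backend side: wrap one classifier result for transport (field names assumed).
function encodeClassification(label, confidence) {
  return JSON.stringify({ type: 'classification', label, confidence });
}

// Assumed label-to-component lookup on the React client.
const COMPONENTS = { button: 'HbButton', input: 'HbTextInput' };

// Client side: parse the message and resolve the component name to mount.
function decodeToComponent(message) {
  const { type, label } = JSON.parse(message);
  if (type !== 'classification') return null;
  return COMPONENTS[label] ?? null;
}

const msg = encodeClassification('button', 0.93);
console.log(decodeToComponent(msg)); // → "HbButton"
```

A real implementation would push these messages over HTTP or a WebSocket so the browser preview updates as new sketches are classified.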