Inspiration
We wanted to create a project using VR or AR technologies, and we realized that the library space has a need for more advanced tools, especially in the stacks of larger libraries. So we came up with cover2cover!
What it does
cover2cover uses Google's Vision API to read the barcodes on the spines of books and report back to the user which titles are currently shelved. This is particularly helpful for people who are vertically challenged or who have back or joint pain and cannot reach down to the lower shelves to check for their book. It also helps in the many cases where a book's identifying information is not printed on the spine.
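Spine barcodes on books are typically EAN-13 codes encoding the book's ISBN-13, so a raw value decoded by the Vision API can be sanity-checked before any lookup. A minimal sketch of that check in Kotlin (the function name `isValidIsbn13` is our illustration, not code from the project):

```kotlin
// Validates an ISBN-13 string, e.g. one decoded from an EAN-13 spine barcode.
// A valid ISBN-13's digits, weighted alternately by 1 and 3, sum to a
// multiple of 10 (the final digit is the check digit).
fun isValidIsbn13(isbn: String): Boolean {
    val digits = isbn.filter { it.isDigit() }
    if (digits.length != 13) return false
    val sum = digits
        .mapIndexed { i, c -> (c - '0') * if (i % 2 == 0) 1 else 3 }
        .sum()
    return sum % 10 == 0
}
```

Filtering the string first lets the check tolerate hyphenated input like "978-0-306-40615-7".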
How we built it
We used Android Studio in order to take advantage of Google's Vision and Search API technologies, and GitHub to keep everything together!
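One way to turn a scanned ISBN into title information is Google's public Books API, which accepts an `isbn:` query on its volumes endpoint. This is our illustrative sketch of building that request URL in Kotlin, not necessarily the exact API or endpoint the project used:

```kotlin
import java.net.URLEncoder

// Builds a lookup URL for Google's Books API volumes endpoint:
//   GET https://www.googleapis.com/books/v1/volumes?q=isbn:<isbn>
// The response is JSON containing title, authors, and other volume info.
fun booksLookupUrl(isbn: String): String {
    val query = URLEncoder.encode("isbn:$isbn", "UTF-8")
    return "https://www.googleapis.com/books/v1/volumes?q=$query"
}
```

The query value is URL-encoded, so the colon in `isbn:` becomes `%3A` in the final URL.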
Challenges we ran into
Database integration was a lot trickier than we had anticipated, as was the Google Search API, which was difficult to work with. As relative newcomers to Android Studio, we also ran into some roadblocks there, but we learned and adapted quickly!
Accomplishments that we're proud of
Having a working demo that shows off our core functionality.
What we learned
How to use Android Studio & Google APIs
What's next for cover2cover
We think this technology has a lot of potential for public and private libraries, as well as for any store with shelving units that carry unclear markings, such as video game stores stocking used games in generic cases. Once fully integrated with a database, we think this could be a useful tool for any organization!