One early Wednesday morning, I woke up excited to read a new chapter of One Piece. However, no translated text was out yet. After cycling through anger, frustration, bargaining, depression, and acceptance, we decided to start this project to keep that from ever happening again.
What it does
As of now, the program asks for an image URL (specifically, a link to one of a chapter's page images in some manga), shows you what text is on the image, and then translates that text.
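The pipeline is roughly: fetch the page image, run it through an OCR service, then translate the recognized text. The OCR step returns nested JSON that has to be flattened back into lines of text; a minimal sketch of that flattening, assuming the response shape of Microsoft's Computer Vision OCR endpoint (the function name is ours, for illustration):

```python
def extract_text(ocr_response: dict) -> list:
    """Flatten a Computer Vision OCR response into lines of text.

    The response nests regions -> lines -> words; each word carries a
    "text" field, which we join back into one string per line.
    """
    lines = []
    for region in ocr_response.get("regions", []):
        for line in region.get("lines", []):
            lines.append(" ".join(w["text"] for w in line.get("words", [])))
    return lines
```

Each returned line can then be passed to the translation step independently, which keeps speech bubbles from running together.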
How we built it
- Python 3
- Microsoft's Cognitive Services (we used the Computer Vision API to extract text from the image)
- Google Translate API
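Wiring the two services together looks roughly like the sketch below. The endpoint versions, region placeholder, and key handling are assumptions for illustration, not necessarily what our submission used; both functions are only defined here, and nothing is called until you supply real keys:

```python
import json
import urllib.parse
import urllib.request

# Placeholders -- substitute your own credentials and Azure region.
AZURE_KEY = "YOUR_AZURE_KEY"
AZURE_OCR_URL = "https://YOUR_REGION.api.cognitive.microsoft.com/vision/v3.2/ocr"
GOOGLE_KEY = "YOUR_GOOGLE_KEY"
GOOGLE_URL = "https://translation.googleapis.com/language/translate/v2"

def ocr_image(image_url: str) -> dict:
    """POST an image URL to the Computer Vision OCR endpoint."""
    req = urllib.request.Request(
        AZURE_OCR_URL + "?language=ja&detectOrientation=true",
        data=json.dumps({"url": image_url}).encode("utf-8"),
        headers={
            "Ocp-Apim-Subscription-Key": AZURE_KEY,
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def translate(text: str, target: str = "en") -> str:
    """Translate text via the Google Translate v2 REST API."""
    query = urllib.parse.urlencode({"q": text, "target": target, "key": GOOGLE_KEY})
    with urllib.request.urlopen(GOOGLE_URL + "?" + query) as resp:
        body = json.load(resp)
    return body["data"]["translations"][0]["translatedText"]
```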
Challenges we ran into
- Coming up with an idea (we finally settled on one after midnight)
- A mid-hackathon productivity crash
- Switching to the Google Translate API after we couldn't get an API key working for Microsoft's Translator API
- Hitting API quota limits
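Quota errors can be softened with a simple retry-and-backoff wrapper around each API call; a sketch under our own naming (the wrapper and its parameters are hypothetical, not part of either API):

```python
import time

def with_backoff(call, retries=3, base_delay=1.0):
    """Retry `call` when it raises (e.g. an HTTP 429 quota error),
    doubling the wait between attempts; re-raise after the last try."""
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

This does not raise the quota itself, but it keeps transient rate-limit failures from killing a run partway through a chapter.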
Accomplishments that we're proud of
- Making this program
- Learning a lot of cool things about APIs!
What we learned
- Try to come up with an idea before a hackathon
What's next for Raw Manga Translator
- Fetch all of a chapter's images automatically instead of one URL at a time
- Improve the quality of the translated text
- Add a GUI
- Edit the image directly, replacing the raw text with its translation