Inspiration

Provide an easy, cross-platform desktop and mobile UI for quick translations without forcing users into a single provider. This helps bridge the gap between the cultural contexts in which words are used, which may not be the same across different providers.

What it does

Single-window Kivy application for translating text with:

  • Provider selection: Google (Web-scraper), Baidu (API), Yandex (API)
  • Source and target language pickers.
  • Unicode (non-Latin) text input and output.
  • Runtime credential entry for Baidu and Yandex (stored in environment variables for the running session).
  • Plain-text logging of translations and diagnostic info.
  • Diagnostic UI feedback when adapters fail (import errors, missing keys, or provider errors).
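The session-scoped credential handling above can be sketched as follows. Function and variable names are illustrative, not the project's actual identifiers; the idea is simply that credentials entered at runtime live in `os.environ` and vanish when the process exits:

```python
import os

def set_session_credentials(provider, **creds):
    """Store credentials (e.g. BAIDU_APPID, BAIDU_APPKEY) in the
    process environment so they persist only for this session."""
    for name, value in creds.items():
        os.environ[f"{provider.upper()}_{name.upper()}"] = value

def get_session_credential(provider, name):
    """Look up a previously stored credential, or None if unset."""
    return os.environ.get(f"{provider.upper()}_{name.upper()}")

# Example: values the user typed into the UI at runtime (fake data)
set_session_credentials("baidu", appid="20240101000000001", appkey="secret")
```

Environment variables are a pragmatic fit here: child threads and dynamically imported adapter modules can read them without the UI passing credentials around explicitly.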

How we built it

Adapter design

  • google_translator_draft3.py | HTML-scraping adapter using requests + BeautifulSoup.
  • baidu_translator_draft2.py | Baidu API adapter (appid/appkey and MD5 signing).
  • yandex_translator_draft1.py | Yandex API adapter (API key + HTTP POST).

Kivy UI

  • Single Kivy script that imports adapters dynamically and exposes pickers and a message area.
  • Uses background threads for blocking network calls and Kivy's @mainthread decorator to update the UI.
  • Per-provider language mapping to accommodate differing language codes.
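The MD5 signing mentioned for the Baidu adapter can be sketched roughly as below. Baidu's translate API computes the signature as the MD5 hex digest of appid + query + salt + appkey; the credentials and helper names here are placeholders, not the project's actual code:

```python
import hashlib
import random

def baidu_sign(appid, query, salt, appkey):
    # Baidu expects sign = MD5(appid + q + salt + appkey),
    # sent as a lowercase hex digest alongside the query.
    raw = appid + query + salt + appkey
    return hashlib.md5(raw.encode("utf-8")).hexdigest()

def build_baidu_params(query, src, dst, appid, appkey):
    # A fresh random salt per request keeps signatures unique.
    salt = str(random.randint(32768, 65536))
    return {
        "q": query,
        "from": src,
        "to": dst,
        "appid": appid,
        "salt": salt,
        "sign": baidu_sign(appid, query, salt, appkey),
    }
```

The resulting dict would be sent as the request parameters of an HTTP POST to the translate endpoint.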

Challenges we ran into

Scraping reliability

  • Google-based scraping can be brittle when over-used: Google changes its language markup and blocks frequent scraping.

Scarce Kivy UI

  • Due to time constraints, we were unable to polish the Kivy-based UI; it is still in its testing phase.

Learning a new Python framework

  • Learning Kivy instead of building on HTML/CSS was painful; Kivy proved much harder to learn and code with than we expected.

Accomplishments that we're proud of

Learning Kivy, as well as how to call provider APIs and how to scrape text from web pages.

What we learned

Kivy works well for polished UIs; however, as noted above, due to time constraints a polished, bug-tested Kivy UI couldn't be finished in time. Scraping public web pages is useful for quick prototypes but isn't reliable for production; official APIs are preferable. Good error messages and concise logs save a lot of time debugging.

What's next for Python-based Translator

Swap the Google scraper for a maintained API or library if possible, to avoid brittle scraping. Add retry/backoff logic and rate-limit handling for network requests. Improve language support checks so the UI only shows languages a provider actually supports.
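The retry/backoff idea could be a small wrapper around any blocking request function. This is a minimal sketch of the planned behavior, not existing project code; names and parameters are illustrative:

```python
import time

def with_retries(request_fn, *, attempts=4, base_delay=0.5,
                 retry_on=(ConnectionError, TimeoutError),
                 sleep=time.sleep):
    """Wrap request_fn so transient network errors are retried
    with exponential backoff (0.5s, 1s, 2s, ...)."""
    def wrapped(*args, **kwargs):
        for attempt in range(attempts):
            try:
                return request_fn(*args, **kwargs)
            except retry_on:
                if attempt == attempts - 1:
                    raise  # out of attempts: surface the error
                sleep(base_delay * (2 ** attempt))
    return wrapped
```

An adapter call could then be wrapped as `translate = with_retries(adapter.translate)`; honoring a provider's rate-limit response (e.g. HTTP 429 with a Retry-After header) would slot into the same except clause.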

Built With

python · kivy · requests · beautifulsoup