Our program was first inspired by Fiserv's API demonstration. We noticed that their data, especially within the JSON files, could be optimized a little more. For example, a phone number was stored as an integer in one place and as a string later in the same document. Since one of our team members was somewhat familiar with optimization, we wanted to optimize Fiserv's data too, though the program could work for mass data optimization in general.
What it does
The project optimizes data so that once it has been written to the index, it does not have to pass back through the query parser and analyzer the next time the user accesses it. In other words, it works much like a cache, and our program returns data faster than simply fetching it through the API.
How I built it
Our team built the program using Lucene, a text search engine library. The program parses the data in the JSON files, separates it into fields to optimize it, and then returns the requested value.
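The flow described above can be sketched with Lucene's standard API. This is a minimal illustration, not our exact code: the field names (`phone`, `name`) and sample values are hypothetical, and it assumes lucene-core and lucene-queryparser on the classpath. It shows the key idea: each JSON record is indexed once as a Document with consistently typed fields, and later lookups hit the index directly instead of re-parsing the JSON.

```java
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.document.StringField;
import org.apache.lucene.document.TextField;
import org.apache.lucene.index.DirectoryReader;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.queryparser.classic.QueryParser;
import org.apache.lucene.search.IndexSearcher;
import org.apache.lucene.search.Query;
import org.apache.lucene.search.TopDocs;
import org.apache.lucene.store.ByteBuffersDirectory;
import org.apache.lucene.store.Directory;

public class JsonFieldIndexSketch {
    public static void main(String[] args) throws Exception {
        StandardAnalyzer analyzer = new StandardAnalyzer();
        Directory index = new ByteBuffersDirectory(); // in-memory index for the sketch

        // Index once: the record becomes a Document whose fields have fixed types,
        // so e.g. a phone number is always stored as a string.
        try (IndexWriter writer = new IndexWriter(index, new IndexWriterConfig(analyzer))) {
            Document doc = new Document();
            doc.add(new StringField("phone", "5551234567", Field.Store.YES)); // hypothetical field
            doc.add(new TextField("name", "Jane Example", Field.Store.YES));  // hypothetical field
            writer.addDocument(doc);
        }

        // Later accesses search the prebuilt index rather than re-reading the JSON.
        try (DirectoryReader reader = DirectoryReader.open(index)) {
            IndexSearcher searcher = new IndexSearcher(reader);
            Query query = new QueryParser("name", analyzer).parse("jane");
            TopDocs hits = searcher.search(query, 10);
            String phone = searcher.doc(hits.scoreDocs[0].doc).get("phone");
            System.out.println(phone);
        }
    }
}
```

The one-time indexing cost is what buys the cache-like behavior: the analyzer and query parser run at write and query time, but the stored field values come straight off the index.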
Challenges I ran into
Calling the Fiserv API through Java proved to be very challenging, partly because the API itself had some issues in its documentation and implementation, and partly because JAX-RS had several thousand other dependencies to contend with.
Accomplishments that I'm proud of
What I learned
We learned a lot more about Lucene, the text search engine library, as well as the backend of a search engine, i.e., how search engines work. Because we learned why it works and not just how, the methods behind the code weren't difficult to understand, although we did run into difficulties.