What it's supposed to do
Transaction Load Manager (TLM) is an application that allows many transactions to be processed in parallel, rather than through a single queue where each transaction must complete before the next can begin. Instead of funneling every transaction through one container, TLM distributes them across several containers, and the number of containers scales automatically with demand so that resources, and therefore cost, are not wasted. A user can choose a number of transactions to simulate the volume of requests over a given time period. The simulation is displayed on a dashboard that shows per-transaction information alongside a graph of the load TLM is handling.
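The core idea, parallel workers instead of a serial queue, can be sketched in a few lines of Python. This is only an illustration of the concept, not our actual code; the function names and the 0.01-second latency are made up for the example:

```python
from concurrent.futures import ThreadPoolExecutor
import time

def process_transaction(txn_id: int) -> int:
    """Stand-in for a single deposit/withdrawal transaction (hypothetical)."""
    time.sleep(0.01)  # simulate I/O latency of a real transaction
    return txn_id

def run_serial(n: int) -> list:
    # Queue model: each transaction waits for the previous one to finish.
    return [process_transaction(i) for i in range(n)]

def run_parallel(n: int, workers: int = 8) -> list:
    # TLM model: several "containers" (here, worker threads) process at once.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(process_transaction, range(n)))
```

With 20 transactions, the serial version takes roughly 20 sleeps end to end while the parallel version overlaps them across workers, which is the speedup TLM aims for at the container level.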
How we built it
We built on Amazon Web Services, using its ECS tool to automatically manage our containers. We manually configured an Application Load Balancer (ALB) in front of our cluster, together with service auto scaling, so that capacity scales up and tears down as needed. Each task runs an image of our “Transaction Processing System” that mimics the layer closest to the RBC deposit/withdrawal system. Our ALB is exposed on a public IP and domain so that external systems can cURL or POST HTTP requests to our system. The scaling policy spins up new tasks at 80% average CPU utilization, and enforces a 300-second cooldown after each scale-out to prevent premature scaling.
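The 80% CPU target and 300-second cooldown described above map onto an ECS target-tracking scaling policy. A rough sketch of that configuration in Python (boto3) is below; the policy, cluster, and service names are placeholders, not our real identifiers:

```python
def build_cpu_scaling_policy(target_cpu: float = 80.0,
                             cooldown_s: int = 300) -> dict:
    """Target-tracking config matching the thresholds described above."""
    return {
        "TargetValue": target_cpu,  # scale out when average CPU hits 80%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ECSServiceAverageCPUUtilization",
        },
        "ScaleOutCooldown": cooldown_s,  # wait 300 s before scaling out again
        "ScaleInCooldown": cooldown_s,
    }

# Registering the policy would look roughly like this (needs AWS credentials;
# "tlm-cluster"/"tlm-service" are hypothetical names):
# import boto3
# boto3.client("application-autoscaling").put_scaling_policy(
#     PolicyName="tlm-cpu-80",
#     ServiceNamespace="ecs",
#     ResourceId="service/tlm-cluster/tlm-service",
#     ScalableDimension="ecs:service:DesiredCount",
#     PolicyType="TargetTrackingScaling",
#     TargetTrackingScalingPolicyConfiguration=build_cpu_scaling_policy(),
# )
```

Target tracking lets ECS itself add and remove tasks around the metric target, rather than us polling CPU manually.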
Why we used our tech
We initially started with Kubernetes and Docker, but eventually learned that we needed to use Amazon Web Services’ Elastic Container Service instead. ECS gave us easy cluster management, resource efficiency, and reliable security.
We used Python for our backend because of its versatility and extensive compatibility with frameworks and libraries.
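The piece that receives the POSTed transactions can be sketched with Python's standard library alone. This is a minimal stand-in for our Transaction Processing System, not the real service; the response fields are invented for the example:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class TransactionHandler(BaseHTTPRequestHandler):
    """Accepts a POSTed JSON transaction and echoes back a processed status."""

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        txn = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps({"status": "processed", "transaction": txn}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence default per-request logging
        pass

def serve(port: int = 8080) -> HTTPServer:
    # Bind locally; in our setup the ALB sits in front of many such tasks.
    return HTTPServer(("127.0.0.1", port), TransactionHandler)
```

Each container runs one such server, and the ALB fans incoming requests out across them.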
Challenges we ran into
Our initial challenge began when we decided to use new tools such as Docker and Kubernetes. Since none of us had prior experience with them, getting started required a lot of configuration, documentation reading, and overall frustration. It wasn’t until many hours in that we reached our first small goal; from then on, we gained momentum toward each subsequent goal.
What we learned
We definitely learned more than specific languages, frameworks, or tools during this hackathon. We learned how to approach problems that seem impossible and daunting at first, but are not too difficult once broken down into sub-problems. This project reinforced the importance of organization and planning before writing a single line of code.
Ways to improve project
After speaking to a couple of mentors, we believe one way to improve our project is to use EKS with Kubernetes and Docker rather than ECS alone with Docker. We discussed the benefits and tradeoffs and concluded that Kubernetes offers the flexibility to migrate workloads to any existing cloud platform. We could also have improved functionality, such as adding CI/CD to keep our workflow consistent.