Inspiration
Version control can often be intimidating for beginners, along with its intricacies: merge conflicts, pull requests, and so on. It made us realize that programming, despite its definite goal of solving problems, still carries a lot of subjectivity, and that subjectivity is reflected in version control. With unorganized development practices, beginners working in teams may not check out their own branches during development, or may often work alongside their peers on exactly the same functionality, each just trying to make it more robust or efficient. This led us to develop something that evaluates your code holistically and rigorously, giving detailed feedback on obvious cases, large inputs, edge cases, runtime, and space.
What it does
The MVP we built during our hackathon uses OpenAI to automate intent recognition and test-case generation, then automatically tests a given custom implementation by comparing it against a correct reference.
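The comparison step can be sketched roughly as follows (the function names and the sample buggy implementation are our own illustration, not the project's actual code): run the candidate and the reference implementation on each generated test input and record any disagreements.

```python
def compare_implementations(candidate, reference, test_inputs):
    """Run both implementations on each input and collect mismatches."""
    mismatches = []
    for args in test_inputs:
        expected = reference(*args)
        actual = candidate(*args)
        if actual != expected:
            mismatches.append((args, expected, actual))
    return mismatches

# Hypothetical candidate: truncating division instead of floor division
candidate = lambda a, b: int(a / b)
reference = lambda a, b: a // b

print(compare_implementations(candidate, reference, [(7, 2), (-7, 2)]))
# → [((-7, 2), -4, -3)]  (the bug only shows up on a negative input)
```

Generated edge-case inputs (like the negative operand above) are exactly what surfaces mismatches that obvious test cases miss.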
How we built it
We built a web app where users can interact with their code analyses and refer back to what they often missed (edge cases or optimizations) when writing such brief algorithmic or scripting functions. We also leveraged web scraping, string manipulation, and system-level execution to automate testing.
Challenges we ran into
The publicly available OpenAI API wasn't of much use to us, since it hadn't been trained on much context relevant to the questions we put it up against. We therefore had to scrape ChatGPT and develop a reliable mechanism to continuously maintain a connection with GPT, fetching and posting information within an open chat. We also faced challenges in making this robust: fixing formatting, and using string formatting to automate error catching in test_cases() functions so that false assertions don't break our code. Having little web-development experience, we also learnt the Flask framework from scratch, how to link it with SQL tables, and how to maintain user history.
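The error-catching idea can be sketched like this (the name run_test_cases and the structure are our assumptions, not the project's exact implementation): execute each generated assertion in its own try/except so that one false or malformed assertion never aborts the whole run.

```python
def run_test_cases(assertions, namespace):
    """Execute each generated assertion string in isolation.

    Returns (passed, failed, errored) counts, so a false assertion
    or garbled generated code never crashes the test harness.
    """
    passed, failed, errored = 0, 0, 0
    for src in assertions:
        try:
            exec(src, namespace)   # e.g. "assert add(2, 3) == 5"
            passed += 1
        except AssertionError:
            failed += 1            # assertion ran but was false
        except Exception:
            errored += 1           # syntax error or bad generated code
    return passed, failed, errored

ns = {"add": lambda a, b: a + b}
print(run_test_cases(
    ["assert add(2, 3) == 5",   # passes
     "assert add(2, 3) == 6",   # false assertion
     "assert add(2 == 5"],      # malformed (missing paren)
    ns,
))  # → (1, 1, 1)
```

Separating AssertionError from other exceptions distinguishes "the code is wrong" from "the generated test itself is broken", which matters when the assertions come from a language model.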
Accomplishments that we're proud of
We were able to build a working implementation of our MVP, except for runtime and memory analysis so far. Our login mechanism and per-user storage of code history make past analyses easy to access, interpret, download, and revisit later. We also made our code fully robust for Python, to the extent that any function of the form def function(arg1, arg2, ...) can automatically have its intent recognized and be tested.
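Recognizing signatures of that shape can be done with a simple regular expression; this is a minimal sketch under our own assumptions (no decorators, defaults, or annotations), not the project's exact pattern:

```python
import re

# Matches a "def name(arg1, arg2, ...):" header, capturing name and args
SIGNATURE = re.compile(r"def\s+(\w+)\s*\(([^)]*)\)\s*:")

def extract_signature(source):
    """Return (function_name, [arg_names]) from the first def found, or None."""
    match = SIGNATURE.search(source)
    if match is None:
        return None
    name, raw_args = match.groups()
    args = [a.strip() for a in raw_args.split(",") if a.strip()]
    return name, args

print(extract_signature("def gcd(a, b):\n    while b:\n        a, b = b, a % b\n    return a"))
# → ('gcd', ['a', 'b'])
```

Once the name and argument list are known, test inputs can be generated and applied to the function automatically.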
What we learned
We explored a lot in web scraping, back-end and front-end development, system calls, and file manipulation. We also learnt a lot about code testing, robustness, and the DOM structure of a well-designed website, and how to scrape it efficiently.
What's next for Test Assist
We would like to give the user more points of feedback, such as time and memory usage and parallel comparison with other users, and to optimize our data pipeline for speed. We would also like to scale it, host it on cloud services like AWS or Azure, and make it robust enough to be language agnostic.