Inspiration
My own experience with the constraints that pile up at every turn of the used-car purchasing journey: wanting low miles but needing the price to be a deal, or accepting higher miles if the car had a single owner who took better care of it than the sample at large.
What it does
The project was originally broader in scope: it was going to assess whether the way a listing's description was phrased indicated a too-good-to-be-true case. But that required quantifying, in the data, which features might make a car a deal-breaker versus not. As it stands, you can paste the listing text directly or provide a Facebook Marketplace link, and the tool checks whether the language used to describe the car is within sample or not.
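The submission doesn't show the actual scoring logic, but the "within sample or not" check can be illustrated with a minimal sketch: build token frequencies from a corpus of sample listings, then score a new listing by its average token log-probability, so wording that departs from the sample scores lower. All function names and example listings below are illustrative, not the project's code.

```python
import math
from collections import Counter

def build_token_logprobs(sample_listings):
    """Smoothed token log-probabilities estimated from sample listing texts."""
    counts = Counter()
    for text in sample_listings:
        counts.update(text.lower().split())
    total = sum(counts.values())
    vocab = len(counts)
    # Laplace smoothing gives unseen tokens a small nonzero probability.
    logprobs = {t: math.log((c + 1) / (total + vocab)) for t, c in counts.items()}
    unk = math.log(1 / (total + vocab))
    return logprobs, unk

def within_sample_score(text, logprobs, unk):
    """Average token log-probability; lower means more unusual wording."""
    tokens = text.lower().split()
    if not tokens:
        return unk
    return sum(logprobs.get(t, unk) for t in tokens) / len(tokens)

sample = [
    "one owner clean title low miles regularly serviced",
    "well maintained garage kept no accidents clean title",
]
logprobs, unk = build_token_logprobs(sample)
typical = within_sample_score("clean title low miles one owner", logprobs, unk)
unusual = within_sample_score("must sell today cash only no questions", logprobs, unk)
assert unusual < typical  # out-of-sample wording scores lower
```

A real version would use the trained model's likelihoods rather than raw token counts, but the in-sample/out-of-sample comparison works the same way.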
How we built it
Tech stack: LangChain, Marimo, Node.js, Polars, TensorFlow
We first did all the data analysis necessary to train a rudimentary model in TensorFlow, then expanded the approach to use LangChain because the epoch training was taking too long under the hackathon's time constraints.
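The actual model was trained in TensorFlow; as a stand-in that runs anywhere, here is a minimal sketch of the kind of rudimentary epoch-trained classifier involved, using NumPy logistic regression over toy bag-of-words features (all data and parameters below are illustrative assumptions, not the project's real pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy bag-of-words features: rows are listings, columns are token counts.
X = rng.integers(0, 3, size=(40, 6)).astype(float)
true_w = np.array([1.5, -2.0, 0.5, 0.0, 1.0, -1.0])
y = (X @ true_w + rng.normal(0, 0.1, 40) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(6)
lr = 0.1
for epoch in range(200):  # each full pass over the data is one epoch
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)  # gradient of mean cross-entropy loss
    w -= lr * grad

train_acc = ((sigmoid(X @ w) > 0.5) == y).mean()
```

The per-epoch loop is where the time cost mentioned above comes from: a real TensorFlow model repeats the same pass over a much larger listing corpus with far more parameters.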
Challenges we ran into
Wrangling larger datasets required a different skill set than typical Python data analysis, and we weren't able to fully grasp the new methods in time to train the model entirely on shop listings for cars.
Accomplishments that we're proud of
What we learned
What's next for NLP lemon discourse
Going back and finding more explicit listing descriptions so we can continue fine-tuning the model to assess whether the sentiment toward the car is within sample or warrants more scrutiny of the seller.
Built With
- altair
- javascript
- polars
- python
- tensorflow