We decided to build a data crawler API because we are organizing Polytechnique Montreal's first hackathon, Hackatown, in two weeks. Since the theme of Hackatown is the smart city, it's only natural that big data will play a major role during the event. We wanted to give hackers a reliable, user-friendly platform so that they can easily access the various open datasets offered by the City of Montreal and the province of Quebec.
What it does
How we built it
We used top-of-the-line technology: Node.js, Angular2, TypeScript and MongoDB. We separated the project in two: the client side and the server side. The client side was mostly the front-end of the web app, developed with Angular2 and Node.js.
The server side was in charge of crawling the open data portals of the City of Montreal and the province of Quebec. We mainly used Node.js with TypeScript to crawl and filter out valid data, and we also removed unresponsive links from the database.
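The link-pruning step described above can be sketched roughly as follows. This is a minimal illustration, not our actual crawler: the `DatasetLink` shape and the probe function are assumptions, and the HTTP check is injected as a callback so the filtering logic can be shown (and tested) without network access.

```typescript
// Sketch of the dead-link pruning step. A DatasetLink is assumed to
// hold a dataset name and its download URL, loosely modeled on open
// data portal resource records.
interface DatasetLink {
  name: string;
  url: string;
}

// The probe reports whether a URL responds; in the real crawler this
// would issue an HTTP request, here it is injected for testability.
type Probe = (url: string) => boolean;

// Keep only the datasets whose URL the probe reports as responsive.
function pruneDeadLinks(links: DatasetLink[], isAlive: Probe): DatasetLink[] {
  return links.filter((link) => isAlive(link.url));
}

// Example with a fake probe that treats ".csv" URLs as live.
const fakeProbe: Probe = (url) => url.endsWith(".csv");

const crawled: DatasetLink[] = [
  { name: "bike-paths", url: "https://example.org/bike-paths.csv" },
  { name: "stale-set", url: "https://example.org/gone" },
];

const live = pruneDeadLinks(crawled, fakeProbe);
// live keeps only the "bike-paths" entry
```

Separating the "is this link alive" check from the filtering makes the cleanup step easy to unit-test and to rate-limit independently of the crawl itself.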
Challenges we ran into
We had trouble using the APIs already available on the open data portals of both of our sources. We also had a little problem setting up our database at first, but in the end everything worked fine.
Accomplishments that we're proud of
We're really proud to demo a good project this year at ConUHacks. We did a really good job on the UI; it's very user-friendly.
What we learned
We learned how to crawl websites for data using Node.js, and we deepened our understanding of new technologies.
What's next for CityMinR
Naturally, we will give interested students access to CityMinR at Hackatown. We also want to keep adding new cities, new data, and new languages for code snippets, along with further polishing.