For my research, I aim to develop a high-quality AI recommender system; the rapid rise of modern AI trends also inspired me to work on these technologies.

What it does

We propose to exploit user-item interactions to guide the representation learning in each modality, and thereby improve personalized micro-video recommendation. We design a Multi-modal Graph Convolution Network (MMGCN) framework built upon the message-passing idea of graph neural networks, which can yield modal-specific representations of users and micro-videos to better capture user preferences. Specifically, we construct a user-item bipartite graph in each modality, and enrich the representation of each node with the topological structure and features of its neighbors. Through extensive experiments on three publicly available datasets, TikTok, Kwai, and MovieLens, we demonstrate that our proposed model significantly outperforms state-of-the-art multi-modal recommendation methods.
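The core idea above can be sketched in a few lines: for each modality, propagate features over the user-item bipartite graph so each node aggregates its neighbors, then fuse the modal-specific user representations. This is a minimal toy sketch, not MMGCN itself; the interaction matrix, feature sizes, random user embeddings, and mean fusion are all illustrative assumptions (MMGCN uses learned embeddings and a learned combination layer).

```python
import numpy as np

# Toy user-item interaction matrix: 3 users x 4 micro-videos (1 = interaction).
interactions = np.array([
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 1, 0, 0],
], dtype=float)

rng = np.random.default_rng(0)
# Hypothetical per-modality item features (e.g. visual, acoustic, textual)
# and randomly initialized user embeddings (stand-ins for learned ones).
modalities = {m: rng.normal(size=(4, 8)) for m in ("visual", "acoustic", "textual")}
user_embed = {m: rng.normal(size=(3, 8)) for m in modalities}

def propagate(inter, user_feats, item_feats):
    """One round of message passing on the user-item bipartite graph:
    each node aggregates the mean of its neighbors' features."""
    deg_u = inter.sum(axis=1, keepdims=True).clip(min=1)   # user degrees
    deg_i = inter.sum(axis=0, keepdims=True).T.clip(min=1) # item degrees
    new_users = inter @ item_feats / deg_u                 # users <- items
    new_items = inter.T @ user_feats / deg_i               # items <- users
    return new_users, new_items

# Modal-specific propagation, then a simple mean fusion across modalities.
user_reps = []
for m, item_feats in modalities.items():
    users, items = propagate(interactions, user_embed[m], item_feats)
    user_reps.append(users)
fused_users = np.mean(user_reps, axis=0)
print(fused_users.shape)  # (3, 8): one fused representation per user
```

A real implementation would stack several propagation rounds, learn the embeddings and fusion weights end-to-end, and score user-item pairs (e.g. by inner product) for ranking.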

How I built it

It is still in progress.

Challenges I ran into

Unreliable internet service, limited GPU memory, and scarce datasets.

Built With
