Inspiration
Faced with the 'black box' nature of advanced AI, our goal was to make AI decisions transparent and understandable, bridging the gap between complex machine learning models and human comprehension.
What it does
Our XAI model uses Retrieval-Augmented Generation (RAG), combined with Wikipedia data, to explain AI decisions clearly. It transforms complex AI outputs into understandable insights, enhancing trust in AI applications.
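As a rough illustration of the retrieval half of that pipeline, here is a minimal, library-free sketch: candidate Wikipedia-style passages are ranked by word overlap with the query, and the best match would then be fed into the generation prompt. The `retrieve` function and the sample passages are hypothetical stand-ins, not our production code.

```python
def retrieve(query, passages, k=2):
    """Rank candidate passages by word overlap with the query --
    a simple stand-in for the retrieval step of a RAG pipeline."""
    q = set(query.lower().split())
    scored = sorted(
        passages,
        key=lambda p: len(q & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

passages = [
    "LIME explains individual predictions of any classifier.",
    "Wikipedia is a free online encyclopedia.",
    "Retrieval augmented generation grounds model outputs in retrieved text.",
]

# The top passage would be prepended to the user's question to form the prompt.
top = retrieve("how does retrieval augmented generation work", passages, k=1)
```

In the full pipeline, the retrieved context plus the question would be sent to a generation model (in our case via Cohere) to produce the plain-language explanation.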
How we built it
We integrated a RAG pipeline that draws on Wikipedia data for depth and broader context, and employed LIME for local, interpretable explanations of individual predictions, focusing on user-centric clarity.
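To show the idea behind the LIME step without depending on the `lime` package, here is a small sketch of its core mechanism: perturb an input around the instance being explained, query the black-box model on the perturbed points, and fit per-feature slopes that approximate the model locally. The `black_box` model and the helper names are hypothetical, for illustration only.

```python
import random

def black_box(x):
    # Hypothetical opaque model: a thresholded linear score.
    return 1.0 if 2.0 * x[0] - 1.0 * x[1] + 0.5 * x[2] > 0 else 0.0

def lime_style_weights(model, instance, n_samples=500, scale=0.5, seed=0):
    """Estimate local feature influence, LIME-style: sample Gaussian
    perturbations around `instance`, record the model's predictions,
    and compute a per-feature least-squares slope."""
    rng = random.Random(seed)
    d = len(instance)
    noises, preds = [], []
    for _ in range(n_samples):
        noise = [rng.gauss(0.0, scale) for _ in range(d)]
        noises.append(noise)
        preds.append(model([a + b for a, b in zip(instance, noise)]))
    mean_p = sum(preds) / n_samples
    weights = []
    for j in range(d):
        cov = sum(n[j] * (p - mean_p) for n, p in zip(noises, preds)) / n_samples
        var = sum(n[j] ** 2 for n in noises) / n_samples
        weights.append(cov / var)
    return weights

# Positive weight: the feature pushes the prediction up near this instance.
w = lime_style_weights(black_box, [0.1, 0.1, 0.1])
```

The real `lime` library fits a weighted sparse linear model rather than these raw per-feature slopes, but the recovered signs tell the same story: which features push the local prediction up or down.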
Challenges we ran into
We had a hard time conceptualizing our idea and weren't sure it would work. We also struggled with the API, since the prompt's output changed on every call, and we needed to understand the output for each input we provided.
Accomplishments that we're proud of
Successfully creating a model that not only deciphers AI decisions but also explains them in an easily comprehensible manner, and producing output that actually makes sense.
What we learned
The intricacies of AI systems, the importance of transparency in AI, and the critical role of user-friendly explanations in technology adoption.
What's next for Explain ML
Further refining our model for broader applications, exploring more datasets for context, and expanding our tool's capabilities to cover a wider range of AI models and industries.
Built With
- cohere
- lime
- python
- wikiapi