Devpost
With 26 years of experience and two 2021 patents for the Emakia system, she has developed AI at Forecast Energy and worked in biotech.
YouTube Content Analyzer empowers Reddit communities to evaluate YouTube videos for toxicity, bias, and misinformation, with Redditors analyzing content collaboratively.
Built an async LLM-powered moderation dashboard detecting toxicity, bias, misinformation, and sentiment—real-time scoring, Redis caching, and Streamlit UI. Modular, reproducible, and demo-ready.
I integrated Dynatrace’s DQL API to evaluate bias and toxicity. The MCP Server supports local observability testing during development, and final results will be stored in MongoDB.
LLM: BiasMesh uses agentic LLMs to classify tweet toxicity, bias, and misinformation—mapping ethical risks in graph form for moderation, traceability, and transparency.
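The "graph form" idea can be illustrated with a small sketch. Everything here is hypothetical (the tweet IDs, labels, and classifier output are invented): a bipartite graph links each tweet to the risk categories it triggered, so a flag can be traced back to its source.

```python
from collections import defaultdict

# Hypothetical classifier output: tweet id -> risk labels flagged by LLM agents.
classifications = {
    "t1": ["toxicity"],
    "t2": ["bias", "misinformation"],
    "t3": [],
}

# Bipartite risk graph: tweet nodes link to risk nodes and vice versa,
# giving moderators traceability in both directions.
graph: dict[str, set[str]] = defaultdict(set)
for tweet_id, labels in classifications.items():
    for label in labels:
        graph[tweet_id].add(label)
        graph[label].add(tweet_id)

# Which tweets carry a given risk?
misinfo_tweets = sorted(graph["misinformation"])
```

Querying the risk node answers "which tweets were flagged for X", while querying the tweet node answers "why was this tweet flagged".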
Emakia is a modular Streamlit app using Google's ADK to detect and correlate toxicity, misinformation, and bias. With BigQuery and transparent agents, it promotes ethical, user-driven AI workflows.
Detect fake news: Emakia uses cutting-edge LLM technology to detect misinformation, bias, and toxicity, empowering users with reliable content verification in real time.
A system to filter toxic content from social media using Vertex AI's text classifier, validating labels and model outputs with smart AI to enhance accuracy and retrain the model.
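The classify, validate, and retrain loop described above can be sketched in outline. This is an assumed shape, not the project's implementation: `classify` stands in for the Vertex AI text classifier and `validate` for the second model that double-checks its labels; disputed examples are queued for retraining.

```python
def classify(text: str) -> str:
    # Stand-in for the Vertex AI text-classifier prediction call.
    return "toxic" if "idiot" in text.lower() else "ok"

def validate(text: str, label: str) -> bool:
    # Stand-in for the validator model; here it rejects labels on blank text.
    return bool(text.strip())

def build_retraining_set(texts):
    """Collect (text, label) pairs whose labels the validator rejected."""
    retrain = []
    for text in texts:
        label = classify(text)
        if not validate(text, label):
            retrain.append((text, label))
    return retrain

queue = build_retraining_set(["you idiot", "nice post", "   "])
```

Only examples where the two models disagree (or the validator rejects the label) go back into training, which is what lets the loop improve accuracy over time.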