As generative AI becomes widely used, one of its biggest challenges is hallucination producing confident but incorrect answers and unsafe prompt manipulation. Our project, PromptOS( Prompt Sanitation & Hallucination ) was built to address both problems at the source. It first sanitizes user prompts to detect unsafe or manipulative instructions, preventing prompt injection attacks. Once a prompt passes safety checks, it is processed by the Gemini API to generate a response. The output is then analyzed using a hallucination detection layer that evaluates uncertainty patterns and factual consistency, producing a confidence score to help users judge reliability. The output is generated once it is connected with Gemini 3 API. Through this project, we learned how critical responsible AI design is and how combining prompt hygiene with response validation can significantly improve trust in AI systems. Our goal is to make AI interactions safer, clearer, and more reliable for everyday users and developers alike.
Log in or sign up for Devpost to join the conversation.