A few years back, before COVID and before generative AI, my family sat together every day in the dining room at 8 PM watching the news. It felt unified and shared. During COVID, clarity mattered more than ever, yet information became fragmented across platforms, duplicated endlessly, and divided by language barriers. I imagined someone in Japan trying to understand financial developments in Spain without speaking Spanish or English. Access to global information should not depend on language, and understanding the world should not require ten open tabs. That idea became AiOn.

AiOn is an AI-native global intelligence platform covering multiple sectors across every country. It aggregates global information in real time, clusters duplicate headlines into structured narratives, and explains why topics are trending using source diversity and velocity. It supports instant multilingual transformation so users can consume global information in any language. AiOn also connects global events to live stock and currency movements within one unified dashboard. Users can customize countries, sectors, and languages while receiving real-time notifications and personalized shortlists that significantly reduce information overload.

I built AiOn alone as a scalable, low-latency streaming system. The frontend is developed with Next.js and deployed on Vercel. The backend is implemented with FastAPI, supporting real-time streaming via Server-Sent Events (SSE). OpenAI powers structured summarization and multilingual output, Claude provides balanced reasoning and contextual analysis, and Perplexity Sonar enables grounded research and fact-checking. Bright Data supports large-scale global data ingestion. Elasticsearch combined with vector embeddings powers clustering and semantic retrieval. The system is designed as a modular, production-style pipeline optimized for scalability and reliability.
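The headline-deduplication step can be sketched as greedy cosine-similarity clustering over embeddings. This is a minimal illustration, not AiOn's actual pipeline: the `toy_embed` function stands in for a real multilingual embedding model, and the 0.85 threshold is a hypothetical tuning value.

```python
# Illustrative sketch: greedy clustering of headlines by cosine similarity.
# `toy_embed` is a stand-in for a real multilingual embedding model; the
# 0.85 threshold is a hypothetical value, not AiOn's production setting.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def cluster_headlines(headlines, embed, threshold=0.85):
    """Assign each headline to the first cluster whose centroid is close enough."""
    clusters = []  # each: {"centroid": vector, "members": [headline, ...]}
    for text in headlines:
        vec = embed(text)
        match = None
        for c in clusters:
            if cosine(vec, c["centroid"]) >= threshold:
                match = c
                break
        if match is None:
            clusters.append({"centroid": vec, "members": [text]})
        else:
            match["members"].append(text)
    return clusters

# Toy embedding: bag-of-words over a tiny vocabulary, for demonstration only.
vocab = ["rates", "bank", "cup", "final"]
def toy_embed(text):
    words = text.lower().split()
    return [words.count(w) for w in vocab]

news = ["Bank raises rates", "Central bank rates up", "Cup final tonight"]
groups = cluster_headlines(news, toy_embed)
# The two rate headlines collapse into one cluster; the sports headline stays separate.
```

In a production setting the centroid comparison would run as a vector search in Elasticsearch rather than a Python loop, but the clustering decision is the same.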

The primary challenges were clustering duplicate multilingual content across global sources, preserving contextual meaning during language transformation, maintaining low-latency streaming performance, and integrating multiple AI providers into a stable routing architecture within hackathon constraints.

Despite those constraints, I built a production-grade AI pipeline solo, integrated multiple AI providers with structured routing and failover logic, implemented real-time streaming updates, and delivered a multilingual, multi-sector intelligence dashboard that operates like a scalable product rather than a prototype.
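The multi-provider routing with failover described above can be sketched as an ordered-fallback router with a simple failure counter. The provider names, fake callables, and failure threshold below are hypothetical placeholders rather than AiOn's actual configuration.

```python
# Illustrative sketch of ordered failover across AI providers.
# Provider callables and the failure threshold are hypothetical placeholders.

class ProviderRouter:
    def __init__(self, providers, max_failures=3):
        # providers: list of (name, callable) tried in priority order
        self.providers = providers
        self.max_failures = max_failures
        self.failures = {name: 0 for name, _ in providers}

    def call(self, prompt):
        """Try each healthy provider in order; skip ones that failed too often."""
        last_error = None
        for name, fn in self.providers:
            if self.failures[name] >= self.max_failures:
                continue  # circuit open: provider considered unhealthy
            try:
                result = fn(prompt)
                self.failures[name] = 0  # a success resets the counter
                return name, result
            except Exception as exc:
                self.failures[name] += 1
                last_error = exc
        raise RuntimeError("all providers failed") from last_error

# Usage with fake providers standing in for real API clients:
def flaky(prompt):
    raise TimeoutError("upstream timeout")

def stable(prompt):
    return f"summary of: {prompt}"

router = ProviderRouter([("primary", flaky), ("fallback", stable)])
name, answer = router.call("global markets brief")
# The flaky primary raises, its failure count increments, and the call
# transparently lands on the fallback provider.
```

A real version would add per-provider timeouts and cooldown-based circuit recovery, but the routing decision shown here is the core of the failover logic.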

I learned how to design and run a multimodal, scalable AI system while actively reducing architectural complexity and latency. Building real-time pipelines requires careful trade-offs between speed, accuracy, and system reliability. Next, I plan to scale the infrastructure, consult with senior engineers, and keep optimizing to reduce latency further while maintaining stability and precision, with the goal of making AiOn the next big thing in the news industry.

Built With

  • Languages: Python, TypeScript
  • Frontend: Next.js (App Router), Tailwind CSS, Framer Motion
  • Backend: FastAPI (async), Pydantic v2, SQLAlchemy, Server-Sent Events (SSE)
  • AI & APIs: OpenAI API (structured summaries, multilingual transformation), Anthropic Claude API (balanced reasoning, multi-turn analysis), Perplexity Sonar API (grounded research and fact-checking), Bright Data API (large-scale global ingestion), HeyGen API (AI video briefings)
  • Search & Retrieval: Elasticsearch, vector embeddings (semantic clustering & deduplication)
  • Agent & Inference Infrastructure: Modal sandbox agents for scalable multi-agent workflows
  • Database & Caching: PostgreSQL, Redis (cache, pub/sub, circuit breakers)
  • Deployment & Cloud: Vercel (frontend), Render (backend), Docker