Inspiration: The Cost of Cultural Hallucination

As a developer working in Nigeria, I watched a local enterprise attempt to automate their customer outreach using generative AI. The AI translated a formal business contract into Yoruba, but the result was a disaster. Not only did the model completely miss the cultural nuance—addressing esteemed business partners like casual peers—but it also failed to mask sensitive customer phone numbers before processing.

It became immediately clear that standard LLMs are not safe for enterprise deployment in low-resource languages. They suffer from severe alignment drift and cultural hallucinations. I realized we didn't need another AI text generator; we needed a strict, hardware-enforced governance layer to supervise the AI. That is how Logi-Guard was born.

What it does

Logi-Guard is a B2B enterprise Quality Control Layer and AI Firewall. It acts as an intermediary between corporate employees and generative AI, utilizing the Logitech MX Creative Console as a physical compliance checkpoint.

  1. Edge Data Loss Prevention (DLP): Before any text is processed, Logi-Guard automatically detects and masks Personally Identifiable Information (PII) like emails and phone numbers.
  2. Tactile Context Routing: Using the MX Dial, human reviewers seamlessly shift the AI's required social persona between Corporate (Strict), Social (Casual), and Cultural (Deep Proverbs).
  3. Quantitative Auditing: Instead of relying on qualitative "vibes," the system grades the AI's output using F1-Scores, Precision, and Recall.
  4. Hardware-Verified RLHF: When a translation is approved, pressing the MX Action Ring cryptographically hashes the log. This turns human oversight into a proprietary dataset for Reinforcement Learning from Human Feedback (RLHF).
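The DLP pass in step 1 can be sketched with simple regex-based detection. This is a minimal illustration, not the shipped implementation; `maskPII`, the pattern list, and the replacement tokens are all illustrative names:

```typescript
// Illustrative Edge DLP pass: detect and mask PII before any text
// reaches the model. Patterns and tokens are assumptions for this sketch.
const PII_PATTERNS: Array<[RegExp, string]> = [
  // Email addresses
  [/[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}/g, "[EMAIL]"],
  // Nigerian-style phone numbers, e.g. +234 803 123 4567 or 08031234567
  [/(?:\+?234|0)[\s-]?\d{2,3}[\s-]?\d{3}[\s-]?\d{4}/g, "[PHONE]"],
];

export function maskPII(text: string): string {
  // Apply every pattern in turn, replacing matches with a safe token.
  return PII_PATTERNS.reduce(
    (acc, [pattern, token]) => acc.replace(pattern, token),
    text
  );
}
```

A production build would use a proper PII detection library rather than hand-rolled regexes, but the principle is the same: the masking happens at the edge, before the text ever leaves the machine.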

How we built it

We architected Logi-Guard using a dual-node system to bridge physical hardware with cloud infrastructure:

  • The Cloud Dashboard: Built with Next.js 15, TypeScript, and Tailwind CSS, hosted on Vercel. This provides the enterprise-grade UI and telemetry visualization using Recharts.
  • The Edge Relay: A lightweight Node.js WebSocket server running locally. This script listens to the Logitech MX hardware via the Options+ software and broadcasts the dial position and button presses directly to the Next.js frontend with near-zero latency.
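The relay's broadcast loop is conceptually simple. In the real build this would sit on the `ws` npm package; in this sketch the client is abstracted behind a minimal interface so the routing logic stands alone, and the event shape is an assumption:

```typescript
// Sketch of the Edge Relay broadcast loop. `RelayClient` mirrors the
// subset of the WebSocket interface the loop needs; in production these
// would be live `ws` connections from dashboard tabs.
type HardwareEvent = { type: "dial" | "press"; value: number };

interface RelayClient {
  readyState: number; // 1 === OPEN, mirroring the WebSocket constant
  send(data: string): void;
}

const OPEN = 1;
const clients = new Set<RelayClient>();

// Forward one hardware event to every connected dashboard tab,
// returning how many clients actually received it.
function broadcast(event: HardwareEvent): number {
  const payload = JSON.stringify(event);
  let delivered = 0;
  for (const client of clients) {
    if (client.readyState === OPEN) {
      client.send(payload);
      delivered++;
    }
  }
  return delivered;
}
```

Because the relay only serializes and fans out small JSON payloads on localhost, the round trip from dial rotation to browser update stays in the low-millisecond range.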

Challenges we ran into

The most significant engineering hurdle was the "Hardware-to-Cloud Paradox." Secure cloud applications (HTTPS) strictly block connections to local, insecure hardware ports (ws://localhost). To solve this for the demonstration, we engineered a Dual-Mode engine. It defaults to a visual software simulation for remote judges while actively listening for the WebSocket handshake, unlocking the physical hardware integration during live deployment.
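The mode-selection logic can be sketched as a race between the relay handshake and a timeout. The `probe` callback, the timeout value, and the function name are assumptions for illustration, not the shipped API:

```typescript
// Illustrative Dual-Mode selector: resolve "hardware" only if the local
// relay's WebSocket handshake succeeds before the timeout; otherwise
// fall back to the visual simulation.
type Mode = "hardware" | "simulation";

async function selectMode(
  probe: () => Promise<void>,
  timeoutMs = 500
): Promise<Mode> {
  const timeout = new Promise<Mode>((resolve) =>
    setTimeout(() => resolve("simulation"), timeoutMs)
  );
  const handshake = probe().then<Mode, Mode>(
    () => "hardware",
    () => "simulation" // connection refused, blocked, or relay absent
  );
  // Whichever settles first wins: a fast handshake unlocks hardware,
  // anything else degrades gracefully to the simulation.
  return Promise.race([handshake, timeout]);
}
```

In the browser, `probe` would wrap `new WebSocket("ws://localhost:PORT")` and resolve on its `open` event; from the judges' remote machines that attempt simply fails, and the simulation takes over.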

Additionally, shifting the UX from a complex "developer terminal" to an intuitive, Grammarly-style interface required heavy iteration. We implemented progressive disclosure—hiding the complex F1-Score matrices behind an "Enterprise Telemetry" toggle so the primary translation interface remains clean.

Accomplishments that we're proud of

We successfully pivoted the concept from a basic translation wrapper into a legitimate infrastructure tool. Achieving under 12ms latency between the physical rotation of the MX Dial and the visual UI update in the browser makes the hardware integration feel completely native and instantaneous.

What we learned

We learned that "Accuracy" is a deeply flawed and dangerous metric when evaluating AI in low-resource African languages. Because the volunteer datasets are noisy, AI teams must rely on Precision and Recall. Furthermore, we validated that hardware-in-the-loop is the ultimate safeguard against AI hallucinations; forcing a physical button press breaks the automation complacency cycle.
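To make the point concrete, here is a minimal token-level grading pass of the kind the telemetry view relies on. Whitespace tokenization and the function name are simplifying assumptions; real evaluation would use a proper tokenizer for Yoruba:

```typescript
// Illustrative token-level grading: precision, recall, and F1 of a
// candidate translation against a reference, using naive whitespace
// tokenization (an assumption for this sketch).
function f1Score(candidate: string, reference: string) {
  const cand = candidate.toLowerCase().split(/\s+/).filter(Boolean);
  const ref = reference.toLowerCase().split(/\s+/).filter(Boolean);

  // Count reference tokens so repeated words are matched at most
  // as many times as they occur.
  const refCounts = new Map<string, number>();
  for (const tok of ref) refCounts.set(tok, (refCounts.get(tok) ?? 0) + 1);

  let matches = 0;
  for (const tok of cand) {
    const left = refCounts.get(tok) ?? 0;
    if (left > 0) {
      matches++;
      refCounts.set(tok, left - 1);
    }
  }

  const precision = cand.length ? matches / cand.length : 0;
  const recall = ref.length ? matches / ref.length : 0;
  const f1 =
    precision + recall > 0
      ? (2 * precision * recall) / (precision + recall)
      : 0;
  return { precision, recall, f1 };
}
```

Precision answers "how much of what the model produced was right," recall answers "how much of the reference it covered"; on noisy volunteer datasets the gap between the two is exactly the signal a single "accuracy" number hides.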

What's next for Logi-Guard

The immediate next step is integrating a locally quantized Llama-3.2-1B model directly into the edge node, ensuring that highly sensitive corporate translations never leave the local machine. Following that, we plan to package the RLHF Cryptographic Vault as a licensable dataset, allowing foundational model providers to improve their African NLP alignment using our vetted, hardware-verified logs.
