TL;DR Our goal: make AI and its agents instantly accessible to anyone on a network — no installation, no setup hassle, fully private and local-first.

Inspiration

AI is powerful but often locked behind cloud services or expensive setups. We asked: what if anyone on a network — home, office, or lab — could access a full AI assistant safely and privately? That’s why we built ALAN — AI and agents accessible to everyone on the same LAN.

What it does

ALAN is a local AI assistant that runs entirely on your LAN.

  • Works like ChatGPT, but fully offline — served from your own network.
  • Answers questions, analyzes data, and executes tasks.
  • Accessible from any device on the LAN — laptop, tablet, or Raspberry Pi.
  • Keeps speed, privacy, and full control in your hands, with no cloud dependency.
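Because ALAN is reachable from any device on the LAN, talking to it from a script is straightforward. Here is a minimal sketch of a client, assuming ALAN exposes an OpenAI-style chat endpoint; the host address, port, path, and model name are illustrative assumptions, not the project's confirmed API:

```python
import json
import urllib.request

# Hypothetical LAN address and endpoint -- adjust to your ALAN deployment.
ALAN_URL = "http://192.168.1.50:8080/v1/chat/completions"

def build_chat_request(prompt: str, model: str = "gpt-oss-20b") -> dict:
    """Build an OpenAI-style chat payload (the API shape is an assumption)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask_alan(prompt: str) -> str:
    """POST the prompt to the LAN server and return the reply text."""
    data = json.dumps(build_chat_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        ALAN_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Any device with Python and network access — including a Raspberry Pi — could run this against the same server.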

How we built it

  • Used open-source models like GPT-OSS 20B and optimized them for local deployment.
  • Containerized with Docker for plug-and-play setup.
  • Built a web-based chat interface with API endpoints for integrations.
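The containerized setup described above might look roughly like this docker-compose sketch; the image name, port, and model path are illustrative assumptions, not the project's actual configuration:

```yaml
services:
  alan:
    image: alan/alan-server:latest   # hypothetical image name
    ports:
      - "8080:8080"                  # web chat UI + API, reachable LAN-wide
    volumes:
      - ./models:/models             # model weights stay on local disk
    environment:
      - MODEL_PATH=/models/gpt-oss-20b
    restart: unless-stopped
```

With a file like this, `docker compose up -d` brings the whole stack up — the kind of one-command, plug-and-play deployment the project aims for.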

Challenges we ran into

  • Running a 20B-parameter model locally without exhausting memory or crashing the server.
  • Making ALAN network-friendly so any device on the LAN could connect.
  • Designing a smooth chat UI while keeping everything offline.

Accomplishments that we're proud of

  • AI working fully offline and accessible over the network.
  • Deployment simplified to one command.
  • Proved private, secure AI can be multi-user and user-friendly.

What we learned

  • Local-first AI is viable, scalable, and fast.
  • Accessibility matters — AI should not be cloud-locked.
  • Privacy-first, network-accessible AI resonates with teams, homes, and labs.

What's next for ALAN

  • Support larger models (up to 120B).
  • Explore edge deployments for portable, multi-user ALAN nodes.
  • Add connectors for IoT, databases, and enterprise tools.
  • Enable team features: shared workspaces, role-based access.
  • Explore AI agents and make them accessible across networks.
