Inspiration

As creators and photographers, we've all experienced the frustration of digging through complex camera menu systems — especially on feature-packed models like the Sony A7. We wanted to eliminate that friction and create an assistant that understands the camera like a pro, but speaks like a friend. That idea became Shashinka AI — a tool designed to help photographers stay in the creative flow without breaking it to decode settings.

What it does

Shashinka AI is a natural language assistant for mirrorless cameras, currently optimized for the Sony A7 series. Users can type questions like: “How do I enable 10-bit video?” or “What's the best profile for S-Log2?”

The system uses:

Vector similarity search to find the most relevant menu items

An LLM (via Vertex AI) to generate clear, friendly explanations

An optional dual engine mode (MongoDB Atlas or FAISS) to serve both low-latency and semantically rich results

All wrapped in a minimal, no-login GitLab Pages interface that loads results instantly via an iframe.
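The retrieval step described above can be sketched as plain cosine similarity over toy vectors (the real menu-item embeddings are precomputed; the items, numbers, and dimensions here are purely illustrative):

```python
import numpy as np

# Toy stand-ins for precomputed menu-item embeddings (hypothetical data).
menu_items = ["Enable 10-bit video", "Picture Profile PP7 (S-Log2)", "Focus peaking level"]
embeddings = np.array([
    [0.9, 0.1, 0.0],
    [0.1, 0.9, 0.1],
    [0.0, 0.2, 0.9],
])
query = np.array([0.8, 0.2, 0.1])  # e.g. embedding of "how do I record 10-bit?"

# Cosine similarity = dot product of L2-normalized vectors
sims = embeddings @ query / (np.linalg.norm(embeddings, axis=1) * np.linalg.norm(query))
best = menu_items[int(np.argmax(sims))]
print(best)  # → "Enable 10-bit video"
```

The top match (plus a few runners-up) is then handed to the LLM layer, which turns the raw menu entry into a friendly explanation.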

🛠️ How we built it

Frontend: Pure static HTML served via GitLab Pages, with form-based inputs targeting Cloud Run endpoints

Backend (Cloud Run):

Python app using FastAPI or Flask

Dual-mode search: MongoDB Atlas vector search and local FAISS ANN index

LLM layer using Vertex AI or OpenAI’s text-davinci or gpt-4 for explanations

Security: Firebase Auth for user login, API Gateway + Cloud Armor for request throttling

Embeddings: Precomputed via local GPU and stored either in MongoDB or in a .faiss index bundled into the Docker image
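The dual-mode search boils down to routing the same query vector to either Atlas or a local FAISS index. A minimal sketch of the Atlas side, using the standard `$vectorSearch` aggregation stage (the index name and field names here are hypothetical, not the project's actual schema):

```python
def atlas_vector_search_stage(query_vec, index_name="menu_vectors", k=5):
    """Build a MongoDB Atlas $vectorSearch aggregation stage."""
    return {
        "$vectorSearch": {
            "index": index_name,        # name of the Atlas vector index
            "path": "embedding",        # document field holding the vector
            "queryVector": list(query_vec),
            "numCandidates": k * 10,    # ANN candidate pool before final scoring
            "limit": k,
        }
    }

stage = atlas_vector_search_stage([0.1, 0.2, 0.3], k=3)
# Atlas engine: the stage runs server-side via the aggregation pipeline:
#   results = db.menu_items.aggregate([stage])   # requires pymongo + an Atlas cluster
# FAISS engine: the same query vector goes to the local ANN index instead:
#   scores, ids = faiss_index.search(np.array([query_vec], dtype="float32"), k)
```

Because both engines take the same query vector and return the same shape of results, the frontend never needs to know which one answered.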

Challenges we ran into

Managing concurrent LLM and embedding API calls while keeping latency low

Implementing rate limiting and abuse protection without hurting the UX

Designing a UI that’s useful for both beginner and expert users

Building dual backends (MongoDB + FAISS) in a way that felt seamless to users
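One pattern that helps with the first two challenges is bounding in-flight LLM calls with a semaphore, so bursts of requests queue instead of overwhelming the API or the cost budget. A minimal sketch (the limit and the fake call are illustrative, not the production values):

```python
import asyncio

MAX_CONCURRENT = 4                       # illustrative cap, not the deployed value
sem = asyncio.Semaphore(MAX_CONCURRENT)

async def explain(menu_item: str) -> str:
    async with sem:                      # at most MAX_CONCURRENT calls in flight
        await asyncio.sleep(0.01)        # stand-in for the real LLM API call
        return f"Here's how to use {menu_item}."

async def main():
    items = ["10-bit video", "S-Log2"]
    # gather preserves input order, so answers line up with items
    return await asyncio.gather(*(explain(m) for m in items))

results = asyncio.run(main())
print(results)
```

Excess requests simply wait at the semaphore, which keeps tail latency predictable without rejecting users outright.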

🏆 Accomplishments that we're proud of

Seamlessly switching between FAISS and MongoDB without changing the user experience

Generating clear, conversational explanations from raw camera documentation

Keeping the tool entirely stateless on the frontend with zero JS dependencies

Deploying both the ANN and LLM search pipelines on Google Cloud Run, fully serverless

What we learned

Deep dive into MongoDB Atlas vector search and hybrid ranking

How to manage concurrency and cost on GCP

The importance of tone and clarity in AI-generated answers for real-world users

That a lightweight frontend can feel powerful when paired with smart backends

What's next for Shashinka AI

Expand support to other mirrorless systems (starting with my personal art powerhouse, the Ricoh GR IIIx)

Add image-based search (e.g., upload menu screenshots)

Introduce multi-language support for international users

Offer workflow-based presets (e.g., “cinematic portrait setup”)

Release as a browser extension or mobile companion app for on-the-go use

Built With

  • annindex
  • cloudarmor
  • docker
  • faiss
  • fastapi
  • firebaseauthentication
  • flask
  • gitlabpages
  • googleapigateway
  • googlecloudrun
  • html
  • iframe
  • mongodbatlas
  • python3.10
  • vertexai

Updates


We originally planned to use FAISS for fast vector search, but ended up relying solely on MongoDB Atlas Search because the hackathon rules prohibited local dependencies and unmanaged services. Thankfully, Atlas handled the job well and integrated smoothly with Vertex AI.
