Inspiration

Frankly, it's the result of a joke conversation with a friend about how much ML experimentation can (or should) be simplified.

What it does

ChatMLE is a voice-first interface for fine-tuning ML models. Instead of writing code, configuring YAML files, and managing data pipelines, you simply speak your intent: "I want to classify customer support tickets into billing, technical, and general inquiries." ChatMLE handles everything from there — parsing your intent, generating synthetic training data with web research, validating data quality, and launching fine-tuning jobs on cloud infrastructure.
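As a rough sketch (not ChatMLE's actual code; all names here are hypothetical), the intent-parsing step might turn the spoken example above into a structure like this:

```typescript
// Hypothetical shape of a parsed intent. In ChatMLE the parsing is done by
// Claude, not a regex; this toy parser only handles the canned phrasing.
interface ParsedIntent {
  task: "classification";
  subject: string;
  labels: string[];
}

function parseIntent(utterance: string): ParsedIntent {
  const match = utterance.match(/classify (.+?) into (.+)/i);
  if (!match) throw new Error("unrecognized intent");
  const labels = match[2]
    .replace(/\band\b/g, "") // drop the trailing "and"
    .split(",")
    .map((l) => l.trim().replace(/\s+inquiries$/, ""))
    .filter((l) => l.length > 0);
  return { task: "classification", subject: match[1], labels };
}
```

The downstream stages (data generation, validation, fine-tuning) would each consume this structure.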

How we built it

  • Anthropic Claude powers the core intelligence: intent parsing from natural language, synthetic training data generation (50 diverse examples), and AI-powered data validation
  • Yutori Browsing API researches real-world examples from the web to inform data generation, finding relevant datasets, patterns, and domain knowledge
  • Anyscale handles model fine-tuning with their OpenAI-compatible API for training Llama and other open-source models
  • OpenAI Whisper + TTS for voice transcription and text-to-speech
  • Built with React, TypeScript, Vite, and Tailwind CSS for a responsive, accessible UI
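To illustrate the "OpenAI-compatible" point, here is a sketch of how a fine-tuning job request could be shaped. The base URL and model name are illustrative assumptions, not confirmed values from ChatMLE:

```typescript
// Sketch of a request to an OpenAI-compatible fine-tuning endpoint.
// The default base URL and the model string are assumptions for illustration.
interface FineTuneRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildFineTuneRequest(
  apiKey: string,
  trainingFileId: string,
  model: string,
  baseUrl = "https://api.endpoints.anyscale.com/v1", // assumed base URL
): FineTuneRequest {
  return {
    url: `${baseUrl}/fine_tuning/jobs`, // OpenAI-compatible route
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ training_file: trainingFileId, model }),
    },
  };
}
```

Passing the result to `fetch(url, init)` would submit the job; error handling and polling for job status are omitted here.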

Challenges we ran into

  • Infrastructure outage: Railway went down mid-event, forcing a switch to Render
  • API pivots under pressure: our original voice provider account got flagged mid-hackathon, forcing a rapid switch to OpenAI Whisper
  • Sponsor API documentation: some sponsor APIs had limited public documentation, requiring experimentation to understand the correct request formats
  • Parallel development: coordinating multiple implementation sessions simultaneously while keeping the codebase buildable
  • Balancing depth vs. breadth: deciding whether to deeply integrate one sponsor API or show broader integration across multiple sponsors

Accomplishments that we're proud of

  • End-to-end voice workflow: from spoken intent to training job creation without touching code
  • Real sponsor integrations: not just UI mockups, but actual API calls to Anthropic, Yutori, and Anyscale
  • Clean, professional UI: fluid responsive design that works across devices, with proper sponsor attribution
  • Built in hours: a functional ML fine-tuning tool created during a single hackathon sprint

What we learned

  • The power of voice interfaces for technical workflows: removing friction makes complex tools accessible
  • The importance of graceful degradation: when one service fails, the app should still function
  • Sponsor APIs have varying levels of maturity; reading the docs carefully (or asking the founders directly!) saves time
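The graceful-degradation lesson can be sketched as a tiny fallback wrapper. This is a synchronous illustration with made-up names; in the real app the providers would be async API calls:

```typescript
// Minimal sketch of graceful degradation: try the primary provider, and
// fall back to a secondary when it throws. Names are illustrative only.
function withFallback<T>(primary: () => T, fallback: () => T): T {
  try {
    return primary();
  } catch {
    return fallback();
  }
}
```

Wrapping each external call this way keeps the app usable when one service (e.g. a flagged voice provider) goes down.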

What's next for ChatMLE

  • Yutori Scouting integration: automated monitoring for new training data sources that match your intent
  • Training progress visualization: real-time loss curves and metrics from Anyscale jobs
  • Model evaluation: test your fine-tuned model directly in the app with voice input
  • Multi-turn conversations: refine your intent through dialogue instead of a single prompt
  • Export to production: one-click deployment of fine-tuned models to inference endpoints

Built With

  • anthropic-claude-api
  • anyscale-fine-tuning-api
  • lucide-react
  • openai-tts-api
  • openai-whisper-api
  • react-18
  • tailwind-css
  • typescript
  • vite
  • web-audio-api
  • yutori-browsing-api
  • yutori-scouting-api