Hi callin.io Team,
I’m Kevin Tunis, founder of the AI Automation Innovators community.
We’re preparing to launch a bi-weekly live series where our community collaboratively builds and evaluates AI workflows using callin.io, GitHub, Claude, and Supabase. We originally scheduled the series for May 6 but postponed it to get the architecture right, and we’re now ready to move forward.
The Goal
We want to go beyond screen shares and enable live, guided collaboration inside callin.io. Participants will contribute prompts, see agents run in real time, and use callin.io Evaluations to score outputs both automatically and via human feedback.
Setup Overview
- callin.io (Cloud + Self-hosted): Core workflow engine
- Claude (Anthropic): Agent reasoning & logic
- Supabase: Session data and evaluation logs (data shape sketched below)
- GitHub: Version control of workflows
- Discord: Voice/chat feedback and community ideas
- callin.io Evaluations: Output scoring (light + metric-based)
Diagram attached below
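Alongside the diagram, here’s a rough sketch of the data shape we have in mind for the Supabase logging piece. This is just to illustrate our thinking — the table name, column names, and environment variables are placeholders we made up, not anything from callin.io or a finalized schema:

```typescript
// Hypothetical shape for the Supabase table that archives one evaluation run.
// "session_evaluations" and all column names are placeholders for illustration.
import { createClient } from "@supabase/supabase-js";

interface SessionRecord {
  session_id: string;          // which live session this run belongs to
  prompt: string;              // prompt contributed by a participant
  output: string;              // the agent's response
  auto_score: number;          // metric-based score from callin.io Evaluations
  human_score: number | null;  // optional human-feedback score collected via Discord
  created_at: string;          // ISO timestamp
}

const supabase = createClient(
  process.env.SUPABASE_URL!,              // placeholder env var names
  process.env.SUPABASE_SERVICE_ROLE_KEY!
);

// Archive a single prompt/output/score record after an evaluation run.
async function archiveRun(record: SessionRecord): Promise<void> {
  const { error } = await supabase.from("session_evaluations").insert(record);
  if (error) throw new Error(`Failed to archive run: ${error.message}`);
}
```

The idea is that every evaluated run in a session produces one row, so we can replay and compare sessions afterwards.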
Use Case Flow
- Pull workflows from GitHub
- Participants contribute prompts and ideas live via Discord
- Agents run in real time
- Evaluations score the responses
- Updates are committed back to GitHub
- Prompt, output, and score are archived in Supabase
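For the commit-back step, our current plan is something along these lines, using Octokit against a placeholder repo (the owner, repo, and path names are just examples, and we’d swap in whatever export approach the team recommends):

```typescript
// Commit an updated workflow JSON export back to GitHub after a session.
// Owner, repo, and path values below are placeholders, not real repositories.
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function commitWorkflow(workflowJson: object, path: string): Promise<void> {
  const owner = "ai-automation-innovators"; // placeholder org
  const repo = "live-build-workflows";      // placeholder repo
  const content = Buffer.from(JSON.stringify(workflowJson, null, 2)).toString("base64");

  // Fetch the current file SHA if it already exists (required to update a file).
  let sha: string | undefined;
  try {
    const { data } = await octokit.repos.getContent({ owner, repo, path });
    if (!Array.isArray(data) && "sha" in data) sha = data.sha;
  } catch {
    // File doesn't exist yet; a new file will be created.
  }

  await octokit.repos.createOrUpdateFileContents({
    owner,
    repo,
    path,
    message: `Update ${path} after live session`,
    content,
    sha,
  });
}
```

This is the part where we’d most appreciate guidance, since we’d rather follow an established pattern for GitHub syncing than invent our own (see the questions below).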
What We’re Hoping to Learn
- Does callin.io support any form of real-time collaborative editing or “safe edit staging”?
- Are there best practices you’d recommend for syncing workflows with GitHub?
- Can temporary/role-based access be granted to participants inside callin.io during sessions?
- Are there scaling or evaluation performance limits we should consider?
We’d love the team’s insight or direction before we go fully public with this next month. Happy to provide more detail or demos privately if helpful.
Thanks for building such a flexible platform. We’re excited to put callin.io at the center of this.
Kevin Tunis