The idea is:
Portkey is an AI gateway that offers improved control, observability, and cost optimization for AI models. Currently, Portkey can be used within callin.io by overriding the Base URL in the OpenAI Chat Model node. However, this process isn't straightforward: it involves configuring a Virtual Key, creating a corresponding Config, and linking it to an API Key. Furthermore, callin.io displays errors during setup because it attempts to validate the API key against OpenAI's platform, which can be misleading.
The proposal is to integrate Portkey as a native option in callin.io's Chat Model or Language Model nodes, making it simpler for users to leverage Portkey's capabilities without these workarounds.
My use case:
I use Portkey to manage AI model requests efficiently, but configuring it within callin.io required extra steps and debugging because of how callin.io validates the OpenAI API key. A native integration would streamline the process and make it more accessible to other users facing the same challenges.
I think it would be beneficial to add this because:
- It removes the confusion caused by callin.io's OpenAI key validation during Portkey setup.
- It makes it easier for users to switch to Portkey without needing workarounds.
- Portkey provides valuable features like fallback models and cost optimization, which could benefit callin.io users.
Any resources to support this?
Are you willing to work on this?
I lack the expertise to assist directly, but I believe Portkey itself is interested in pursuing this integration.
Just wanted to bump this with a comment. This tool has the potential to be really useful for troubleshooting and safely interacting with various LLMs by making common guardrails easy to apply. I see it as an upgraded version of OpenRouter.
It sounds like you were able to get it to work by overriding the OpenAI chat model. I am trying to do the same but struggling to get it to work. Can you share any details on how you did it?
Hello everyone!
I'm from the Portkey team.
We recently released our Portkey integration with callin.io:
Quick setup: Override the OpenAI node's base URL to https://api.portkey.ai/v1 and add your Portkey config in the headers. The validation errors are annoying but harmless; just ignore them.
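For anyone wiring this up outside the node as well, here's a minimal sketch of the override in plain Python. The header names (`x-portkey-api-key`, `x-portkey-config`) follow my reading of Portkey's docs and the key values are placeholders, so verify against Portkey's API reference before relying on them:

```python
# Sketch of the manual workaround: point any OpenAI-compatible client at
# Portkey's gateway and pass the Portkey credentials via extra headers.
# Header names are assumptions based on Portkey's docs -- verify before use.
PORTKEY_BASE_URL = "https://api.portkey.ai/v1"

def build_portkey_headers(portkey_api_key: str, config_id: str) -> dict:
    """Headers to add alongside the base-URL override in the OpenAI node."""
    return {
        "x-portkey-api-key": portkey_api_key,  # your Portkey API key
        "x-portkey-config": config_id,         # Config that holds your Virtual Key
        "Content-Type": "application/json",
    }

headers = build_portkey_headers("pk-placeholder", "cfg-placeholder")
print(PORTKEY_BASE_URL)  # use this as the node's Base URL
print(headers["x-portkey-config"])
```

The same headers can be passed to the official OpenAI SDK via its `default_headers` option if you prefer testing outside callin.io first.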
What you’ll get with Portkey:
- Automatic fallbacks (never get stuck if one model is down)
- Real cost tracking across all your AI calls
- Built-in guardrails to catch hallucinations
- Switch between 1600+ models without changing your workflows
- Add governance to your callin.io workflow
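To illustrate the fallback feature above, a Portkey Config with an automatic fallback chain looks roughly like this. This is a sketch: the field names (`strategy`, `targets`, `virtual_key`) follow my reading of Portkey's config schema, and the virtual-key IDs are placeholders:

```python
import json

# Sketch of a Portkey Config that falls back to a second provider when the
# primary fails. Schema fields are assumptions from Portkey's config format;
# the virtual-key IDs below are placeholders.
fallback_config = {
    "strategy": {"mode": "fallback"},
    "targets": [
        {"virtual_key": "openai-vk-placeholder"},     # primary provider
        {"virtual_key": "anthropic-vk-placeholder"},  # used if primary fails
    ],
}

# This JSON is what you'd save as a Config in Portkey and reference
# from the x-portkey-config header.
print(json.dumps(fallback_config, indent=2))
```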
The native integration will make all this even better with direct observability in callin.io.