I've fine-tuned a model in Google Vertex AI named quotes-test2, using gemini-1.5-flash as the base.
When I try to use it, I get an 'Unsupported Model' error message. However, if I switch the model back to gemini-1.5-flash within my node, it works correctly.
How can I utilize a fine-tuned model in callin.io? I'm also open to exploring other platforms if they offer compatibility.
Information on your callin.io setup
- callin.io version: 1.80.3
- Running callin.io via callin.io cloud
I'm not entirely sure as I haven't personally tested this, but it might be due to fine-tuned models from Google not supporting JSON mode.
Tuned Models
Tuned models have the following limitations:
- The input limit for a tuned Gemini 1.5 Flash model is 40,000 characters.
- JSON mode is not supported with tuned models.
- Only text input is supported.
Thanks for the reply. Do you have any ideas on how to use a custom fine-tuned model with callin.io?
This is currently a major limitation for me, because I need better training options than RAG and prompt engineering.
I know it's definitely possible using HTTP requests or some basic AI nodes. I'm not certain if the AI Agent node supports this.
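For the HTTP-request route, here is a minimal sketch of how a tuned Gemini model deployed on a Vertex AI endpoint can be called through the REST API's `generateContent` method. The project ID, location, and endpoint ID below are placeholders for your own values, and this assumes your tuned model is deployed to an endpoint; the request body deliberately omits any JSON-mode setting, since that isn't supported for tuned models.

```python
import json

def build_generate_content_request(project_id: str, location: str,
                                   endpoint_id: str, prompt: str):
    """Build the URL and JSON body for a generateContent call
    against a tuned-model endpoint (sketch, not tested)."""
    url = (f"https://{location}-aiplatform.googleapis.com/v1/"
           f"projects/{project_id}/locations/{location}/"
           f"endpoints/{endpoint_id}:generateContent")
    body = {
        # Plain text input only -- tuned models don't support
        # JSON mode or non-text parts.
        "contents": [{"role": "user", "parts": [{"text": prompt}]}]
    }
    return url, json.dumps(body)

# Placeholder values -- replace with your own project/endpoint.
url, body = build_generate_content_request(
    "my-project", "us-central1", "1234567890", "Give me a quote")
```

In callin.io, you could paste the resulting URL and body into an HTTP Request node and authenticate with a Google OAuth2 credential (Bearer token); the model's reply comes back in the `candidates` array of the response.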
I was reviewing the update logs. This might be a bug: they recently fixed an issue so that fine-tuned OpenAI models now appear in the AI Agent node's model list, so fine-tuned Google models may simply not have gotten the same fix yet.
This thread was automatically closed 90 days following the last response. New replies are no longer permitted.