Issue using custom model with Google Vertex Chat Model

5 Posts
3 Users
0 Reactions
3 Views
d0ntcri
(@d0ntcri)
Posts: 2
New Member
Topic starter
 

I've fine-tuned a model in Google Vertex AI named quotes-test2, using gemini-1.5-flash as the base.

I'm encountering an 'Unsupported Model' error message.

However, if I switch the model to gemini-1.5-flash within my node, it functions correctly.

How can I use a fine-tuned model in callin.io? I'm also open to exploring other platforms if they offer compatibility.

Information on your callin.io setup

  • callin.io version: 1.80.3
  • Running callin.io via callin.io cloud
 
Posted : 11/03/2025 1:55 pm
ThinkBot
(@thinkbot)
Posts: 26
Eminent Member
 

I'm not entirely sure, as I haven't tested this myself, but it might be because Google's fine-tuned models don't support JSON mode.

Tuned Models
Tuned models have the following limitations:

  • The input limit for a tuned Gemini 1.5 Flash model is 40,000 characters.
  • JSON mode is not supported with tuned models.
  • Only text input is supported.
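Given those limits, it can help to validate input before sending anything to a tuned model, so the failure happens on your side with a clear message. A minimal sketch (the function name and the exact behavior are my own, not from any SDK):

```python
# Input limit for a tuned Gemini 1.5 Flash model, per the list above.
MAX_TUNED_FLASH_INPUT_CHARS = 40_000


def validate_tuned_model_input(prompt):
    """Check a prompt against tuned Gemini 1.5 Flash input limits.

    Tuned models accept text only and cap input at 40,000 characters,
    so reject anything else before making a request.
    """
    if not isinstance(prompt, str):
        raise TypeError("Tuned models accept text input only.")
    if len(prompt) > MAX_TUNED_FLASH_INPUT_CHARS:
        raise ValueError(
            f"Prompt is {len(prompt)} characters; tuned Gemini 1.5 Flash "
            f"accepts at most {MAX_TUNED_FLASH_INPUT_CHARS}."
        )
    return prompt
```

In a workflow this would run in a Code node (or equivalent) just before the model call, so an oversized or non-text payload never reaches the API.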
 
Posted : 15/03/2025 5:38 pm
d0ntcri
(@d0ntcri)
Posts: 2
New Member
Topic starter
 

Thanks for the reply. Do you have any ideas on how to use any custom fine-tuned models with callin.io?

This is currently a major limitation of callin.io for me, since I need training options beyond RAG and prompt engineering.

 
Posted : 18/03/2025 2:57 pm
ThinkBot
(@thinkbot)
Posts: 26
Eminent Member
 

It's definitely possible using HTTP Request nodes or some of the basic AI nodes; I'm just not certain whether the AI Agent node supports it.
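As a sketch of the HTTP-request route: a tuned model deployed on Vertex AI is reachable through its endpoint's `generateContent` REST call. The project, region, and endpoint ID below are placeholders, and the body deliberately requests plain text only (no JSON mode, per the limitations above):

```python
def build_generate_content_request(project, region, endpoint_id, prompt):
    """Build the URL and JSON body for a Vertex AI generateContent call
    against a tuned-model endpoint.

    project, region, and endpoint_id are placeholders for your own
    Google Cloud values.
    """
    url = (
        f"https://{region}-aiplatform.googleapis.com/v1/"
        f"projects/{project}/locations/{region}/"
        f"endpoints/{endpoint_id}:generateContent"
    )
    body = {
        # Text-only parts: tuned models don't support JSON mode or
        # non-text input, so keep the request to a plain text prompt.
        "contents": [{"role": "user", "parts": [{"text": prompt}]}]
    }
    return url, body
```

You would then POST `body` to `url` (e.g. from an HTTP Request node) with an `Authorization: Bearer <token>` header, where the token comes from your Google Cloud credentials (for example `gcloud auth print-access-token`). I haven't verified this inside callin.io specifically, but the endpoint shape is the standard Vertex AI REST pattern.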

I was also reviewing the release notes, and this might be a bug: a recent fix made fine-tuned OpenAI models visible in the AI Agent node's model list, so Vertex tuned models may just not be wired up yet.

 
Posted : 22/03/2025 6:11 pm
system
(@system)
Posts: 332
Reputable Member
 

This thread was automatically closed 90 days following the last response. New replies are no longer permitted.

 
Posted : 20/06/2025 6:11 pm