
How to avoid using sub-workflows for simple LLM tool calls in callin.io

7 Posts
3 Users
0 Reactions
5 Views
joshkasaptriosoft
(@joshkasaptriosoft)
Posts: 3
Active Member
Topic starter
 

Question: How to avoid using sub-workflows for simple LLM tool calls in callin.io?

Hi all — I’m working on an AI agent in callin.io that routes incoming user queries to the appropriate tool based on intent. The architecture is pretty straightforward: the agent chooses between Tool A or Tool B, depending on the user’s input.


Each Tool has a focused LLM model with its own specific prompt and behavior — for example:

  • Tool A identifies tone,
  • Tool B summarizes text.

I noticed callin.io supports calling LLMs or tools inside sub-workflows, which is great for complex logic. In my use case, though, these are just lightweight prompt wrappers, so breaking each one out into a full sub-workflow feels like unnecessary overhead, especially when each sub-workflow is specific to its parent workflow and consists of only a single node.

The problem:

I’d love to just pass inputs from the AI Agent directly to an inline “LLM call” with its specific prompt (without forcing a sub-workflow structure). But from what I can tell, callin.io doesn’t allow basic Message Model (LLM) nodes to be triggered conditionally from the main workflow unless they’re wrapped in a sub-workflow. These tools are also far from set in stone: I’m constantly changing how they work to find the best setup, so having everything split into separate workflows, where every change has to be made in both the parent and the sub-workflow, quickly becomes hard to manage.

It seems callin.io can route to tools that perform code or trigger complex chains via sub-workflows…
…but not simply route to lightweight LLM prompt templates without adding sub-workflow complexity.
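Outside of callin.io, the pattern I’m after is nothing more than a couple of prompt wrappers. A rough Python sketch of the idea (`call_llm` is a placeholder for whatever chat-completion client you use, and the prompts are illustrative):

```python
# Sketch of the "lightweight prompt wrapper" pattern I'd like inline.
# call_llm is a stand-in for a real chat-completion API call.

TONE_PROMPT = "Identify the tone of the following text:\n{text}"
SUMMARY_PROMPT = "Summarize the following text in one sentence:\n{text}"

def call_llm(system_prompt: str, user_text: str) -> str:
    # Placeholder: swap in a real client (e.g. an OpenAI chat call).
    return f"[LLM reply to: {system_prompt.splitlines()[0]}]"

def tool_a_identify_tone(text: str) -> str:
    # "Tool A" is just this prompt; no separate workflow needed.
    return call_llm(TONE_PROMPT.format(text=text), text)

def tool_b_summarize(text: str) -> str:
    # "Tool B" likewise is only a different system prompt.
    return call_llm(SUMMARY_PROMPT.format(text=text), text)
```

Each "tool" is a few lines, which is why a full sub-workflow per tool feels heavy.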

My Question:

Is there a better way to structure this so I can keep these simple LLM prompt calls inline, without building a separate sub-workflow for each one? Maybe it’s not directly possible the way I’m imagining, but still achievable via a workaround?

Here is an example workflow showing how I currently have it working with sub-workflows, which is not ideal.

 
Posted : 23/06/2025 8:22 pm
Sudhanshu_Sharma
(@sudhanshu_sharma)
Posts: 13
Active Member
 

Hello,

I believe I understand the issue you're encountering.

Yes, there is a workaround available for this.

Just a simple question:
Are you using callin.io cloud or self-hosted?
If your answer is self-hosted, then this is for you :backhand_index_pointing_down:

Hope this workaround & solution helps :white_check_mark:.

 
Posted : 24/06/2025 7:57 am
octionic
(@octionic)
Posts: 9
Active Member
 

As of yesterday, this has become a native feature. With prerelease 1.100.0, a new Model Switcher Node has been introduced. This node can be placed between an Agent and multiple models.
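For anyone trying to picture what that routing does, here is a conceptual sketch only, not the node's actual API: a switcher picks exactly one model/prompt per request instead of fanning out to all of them (the predicates and handlers below are made up for illustration):

```python
# Conceptual sketch of model switching -- NOT the real node's API.
# A switcher checks routing rules in order and dispatches the request
# to exactly one configured model/prompt, with a fallback.

def model_switcher(request, routes, default):
    # routes: list of (predicate, handler) pairs, checked in order.
    for matches, handler in routes:
        if matches(request):
            return handler(request)
    return default(request)

routes = [
    (lambda r: "tone" in r, lambda r: "tone-model handled it"),
    (lambda r: "summar" in r, lambda r: "summary-model handled it"),
]
result = model_switcher("please summarize this", routes, lambda r: "fallback")
```

The key property is single dispatch: one request goes to one model, rather than being broadcast to every model.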

 
Posted : 24/06/2025 8:14 am
Sudhanshu_Sharma
(@sudhanshu_sharma)
Posts: 13
Active Member
 

Ohh… great! I wasn't aware of that… fantastic!

Thanks for the update.

 
Posted : 24/06/2025 12:30 pm
joshkasaptriosoft
(@joshkasaptriosoft)
Posts: 3
Active Member
Topic starter
 

The workflow you shared won’t quite work for me, because I need the agent to intelligently call only the tools it requires, based on the message context and its instructions.

If I have 20 tools, for instance, I don't want the AI to loop through asking the same question to 20 models when only the last model is relevant.

Furthermore, an agent possesses memory/context of tool calls it has made and their responses to better determine if it needs to make another call. For example, tool 20, followed by tool 5 using information received from tool 20, followed by tool 1 using information returned from tools 20 and 5. However, this is all dynamic, not hard-coded, so the AI can take any path it desires. I haven't been able to figure out a good way to replicate this behavior without tool calls from an agent.
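Roughly, the loop I'm trying to replicate looks like this (a hedged Python sketch; `select_tool` stands in for the model's own tool choice, and the plan here is hard-coded only to illustrate the 20 → 5 → 1 chain):

```python
# Sketch of the dynamic agent loop described above: the model picks
# tools one at a time, and each call plus its result stays in the
# history so later calls can build on earlier ones.

def select_tool(history):
    # Placeholder policy: a real agent would ask the LLM which tool
    # (if any) to call next, given the conversation + tool history.
    plan = ["tool_20", "tool_5", "tool_1"]
    return plan[len(history)] if len(history) < len(plan) else None

def run_tool(name, history):
    # Each tool sees prior results, mirroring tool 5 and tool 1
    # consuming what tool 20 returned.
    prior = [result for _, result in history]
    return f"{name} result (saw {len(prior)} prior results)"

def agent_loop():
    history = []
    while (tool := select_tool(history)) is not None:
        history.append((tool, run_tool(tool, history)))
    return history
```

The point is that only the tools the agent chooses actually run, and each run can see earlier results, rather than every model being queried in sequence.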

I should also clarify that when I mention multiple models for various tasks, I don’t truly mean different base models. For example, I’m using gpt-o4-mini for all of them, but each has a different system prompt to elicit different behavior when called.

 
Posted : 24/06/2025 1:54 pm
joshkasaptriosoft
(@joshkasaptriosoft)
Posts: 3
Active Member
Topic starter
 

This might work, but I can’t confirm until I test it. I’m currently using Docker, and it appears the image hasn’t been updated yet.

 
Posted : 24/06/2025 1:58 pm
octionic
(@octionic)
Posts: 9
Active Member
 

You need to pull the image with the "next" tag.
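Assuming a standard Docker Hub setup, pinning that prerelease channel in docker-compose would look roughly like this (the image name and port are placeholders, not the real ones):

```yaml
# docker-compose.yml fragment pinning the prerelease image.
# "callinio/callinio" and the port are illustrative placeholders --
# substitute your actual image name and port mapping.
services:
  callinio:
    image: callinio/callinio:next   # prerelease channel with the new node
    ports:
      - "5678:5678"
```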

 
Posted : 24/06/2025 2:10 pm