Hello, I believe the issue might be that you've placed your "system prompt" into the "user prompt" field. This would prevent the AI agent from receiving...
I think you might find a more straightforward approach using callin.io forms beneficial.
Hello! Unfortunately, I don't have a Windows machine available for testing, and I'm running callin.io via Docker.

1. Linux or Windows? If you're e...
Hello! That's an excellent question, and I believe I had a similar template in mind previously. Here's an approach you could use that handles multip...
Hello! The output from a tool can only be sent back to the Agent that utilized it. While the Agent might not return the tool's output verbatim, you ...
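A minimal sketch of the idea above, in Python and assuming a generic agent loop (not callin.io's actual internals): the tool's raw output flows back only to the agent that called it, and the agent decides how to phrase what it passes onward.

```python
# Sketch: the tool's raw result is returned only to the calling agent;
# the agent may paraphrase it rather than repeat it verbatim.
# `lookup_order` is a hypothetical tool for illustration.

def lookup_order(order_id: str) -> dict:
    # Hypothetical tool: its raw output is never shown to the user directly.
    return {"order_id": order_id, "status": "shipped"}

def agent_reply(user_message: str) -> str:
    # The agent calls the tool internally...
    result = lookup_order("A-123")
    # ...then phrases the answer itself. If you need the verbatim tool
    # output, instruct the agent explicitly (e.g. "quote the tool output").
    return f"Your order {result['order_id']} is currently {result['status']}."

print(agent_reply("Where is my order A-123?"))
```

If you want the exact tool output in the final response, the most reliable lever is the agent's prompt, since the framework only hands the output to the agent, not to the user.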
Hello! I tried this myself and suspect the issue might be the llama 3.1 model, and potentially running it on less powerful hardware. My tests incl...
This might just be related to the model itself. This OpenAI forum post discusses a similar issue with o1 JSON responses. My personal recommendation ...
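One common workaround for flaky JSON replies, sketched here as an illustration (not necessarily what the linked thread recommends, and not a callin.io built-in): strip any markdown fences the model wraps around its answer before parsing it.

```python
import json

# Hedged sketch: models sometimes wrap JSON in ``` fences; sanitize
# the reply before handing it to json.loads.

def parse_model_json(text: str) -> dict:
    cleaned = text.strip()
    if cleaned.startswith("```"):
        # Remove the surrounding backtick fences.
        cleaned = cleaned.strip("`")
        # Drop an optional language hint such as "json".
        if cleaned.startswith("json"):
            cleaned = cleaned[len("json"):]
    return json.loads(cleaned)

print(parse_model_json('```json\n{"ok": true}\n```'))
```

Pairing this with a retry on `json.JSONDecodeError` usually covers the remaining malformed replies.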
Hello! Some observations on your tool descriptions:

Example 1: Call this tool to get context from a vector database that will assist in writi...
You might want to explore the “chat memory manager” and “window buffer memory” (found under Advanced AI > Other AI nodes > Miscellaneous > Ch...
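As a conceptual sketch of what a window buffer memory does (the node's exact behavior in callin.io may differ): keep only the most recent k messages so the prompt stays small.

```python
from collections import deque

# Illustrative sketch of a "window buffer memory": a fixed-size window
# over the conversation; older messages fall off automatically.

class WindowBufferMemory:
    def __init__(self, k: int = 5):
        self._buffer = deque(maxlen=k)  # deque drops the oldest entry itself

    def add(self, role: str, content: str) -> None:
        self._buffer.append({"role": role, "content": content})

    def load(self) -> list:
        return list(self._buffer)

memory = WindowBufferMemory(k=2)
memory.add("user", "Hi")
memory.add("assistant", "Hello!")
memory.add("user", "What did I just say?")
# Only the last two messages remain inside the window.
print(memory.load())
```

The chat memory manager, by contrast, gives you explicit control over inserting and deleting messages rather than a fixed sliding window.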
Great point! For most use cases, I wouldn't be concerned about the API limits. Are you encountering quota issues with your current workflow? Rega...
Yes, I can confirm this is functioning correctly. Cheers and much appreciation to the callin.io team for making this a reality! Just a few quick...
In my experience, the AI tools agent is significantly more capable than other options when tool utilization is crucial. Here's a brief example that I ...
Sure, I can try to explain.

1. Help Agents understand and pick their tools. I've noticed a common pitfall where we focus too much on the "how" of a ...
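A hedged sketch of that point: write tool descriptions around *when* to use the tool and *what* it returns, rather than its internals. The tool names and wording below are purely illustrative.

```python
# Illustrative tool definitions: each description tells the agent when
# to pick the tool and what it returns, not how it works inside.

tools = [
    {
        "name": "search_docs",
        "description": (
            "Use when the user asks how a product feature works; "
            "returns short documentation excerpts."
        ),
    },
    {
        "name": "create_ticket",
        "description": (
            "Use only after the user confirms they want to report a bug; "
            "returns the new ticket's ID."
        ),
    },
]

def pick_tool(user_message: str) -> str:
    # Toy stand-in for the model's tool choice, driven by the descriptions.
    if "bug" in user_message.lower():
        return "create_ticket"
    return "search_docs"

print(pick_tool("I want to report a bug"))
```

When descriptions are written this way, the model can match the user's intent to a tool without needing to reason about implementation details.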