I'm using the latest version of callin.io to build conversational AI agents with tool calling. In my current setup, callin.io handles only integrations and workflow management, while Dify handles prompt engineering and agent creation.
Since the AI Agent node in callin.io has matured and received updates, I decided to try building the agent directly in callin.io, bypassing Dify. I used exactly the same LLM, prompt, and settings (temperature, top-p) in both tests.
However, the responses generated by callin.io were noticeably worse. They felt less human, used emojis poorly, didn't follow the step-by-step instructions in the prompt, and often mixed in sentences lifted straight from the prompt, so the interactions lost their conversational quality.
I'm wondering whether there's a specific way to craft prompts for callin.io or whether this is an inherent limitation. Agents built in callin.io seem to come out overly formal and rigid, regardless of the temperature and top-p settings.
Information on your callin.io setup
- callin.io version: 1.79.1
- Database (default: SQLite): Postgres
- Running callin.io via (Docker, npm, callin.io cloud, desktop app): Docker
- Operating system: Ubuntu 20.04
You might just be missing a few settings; once they're in place it should work fine.
Consult this documentation.
Here's a quick checklist (with a payload sketch after it for reference):
- Have you configured the System and User messages?
- Are you using the same LLM model?
- Are you using memory?
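For reference, here's a rough sketch of what the final chat payload should look like when all three line up. I'm assuming an OpenAI-style chat completions schema; the model name, sampling values, and messages are placeholders, not callin.io internals:

```typescript
// Minimal sketch of the chat payload the agent should end up sending when the
// System message, model, sampling settings, and memory all match. The model
// name, values, and messages are placeholders, not callin.io internals.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

const messages: ChatMessage[] = [
  { role: "system", content: "You are a warm, informal assistant. Use emojis where natural." },
  // "memory" in practice means earlier turns are replayed here:
  { role: "user", content: "Hi! Can you help me pick a plan?" },
  { role: "assistant", content: "Of course! 😊 Tell me a bit about what you need." },
  { role: "user", content: "Something cheap for a small team." },
];

const payload = {
  model: "gpt-4o",      // must be the identical model in both setups
  temperature: 0.8,     // same sampling settings on both sides
  top_p: 0.95,
  messages,
};

console.log(JSON.stringify(payload, null, 2));
```

The key point is that memory means the earlier turns get replayed in the messages array; if they don't, the model has no prior tone to stay consistent with.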
If this resolves your issue, please mark it as the solution.
I suspect Dify might be injecting additional system messages or hidden instructions that shape the response style. It's worth inspecting the API calls for differences.
If Dify is producing better responses, compare the exact payload it sends to the LLM with the one callin.io sends.
Use callin.io's HTTP Request node to call the same LLM API manually with identical settings and compare the outputs.
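The same check can also be scripted outside callin.io entirely. Here's a rough TypeScript sketch, assuming an OpenAI-compatible chat completions endpoint; the URL, model name, API key variable, and messages are placeholders for whatever you actually use:

```typescript
// Rough sketch of calling the LLM directly with the same settings, to see the
// "raw" model behaviour with no agent scaffolding. Endpoint, model, and key
// are assumptions; adjust for your actual provider.
const API_URL = "https://api.openai.com/v1/chat/completions"; // assumption
const API_KEY = process.env.OPENAI_API_KEY ?? "";

async function askDirectly(prompt: string): Promise<string> {
  const res = await fetch(API_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",      // same model used in both Dify and callin.io
      temperature: 0.8,     // same sampling settings as in both tools
      top_p: 0.95,
      messages: [
        { role: "system", content: "You are a warm, informal assistant. Use emojis where natural." },
        { role: "user", content: prompt },
      ],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

// Compare this output against what Dify and the callin.io AI Agent return
// for the same prompt.
askDirectly("Hi! Can you help me pick a plan?").then(console.log);
```

If the raw call already sounds stiff, the issue is the prompt or the model; if it sounds fine, something in the agent layer is wrapping or rewriting your prompt.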
You could also add a Custom Function node to pre-process the prompt before it reaches the AI Agent, along the lines of the sketch below.
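A minimal sketch of that kind of pre-processing; the function name and the tone reminder text are made up purely for illustration:

```typescript
// Hypothetical pre-processing step to run before the prompt reaches the
// AI Agent. The function name and the tone reminder are illustrative only.
function preprocessPrompt(rawPrompt: string): string {
  const toneReminder = [
    "Follow the numbered steps in order and do not skip any.",
    "Keep the tone casual and human, and use emojis where natural.",
  ].join(" ");

  return [
    toneReminder,
    rawPrompt.trim().replace(/\n{3,}/g, "\n\n"), // collapse stray blank lines
  ].join("\n\n");
}

// Example: wrap the incoming prompt before handing it to the agent node.
console.log(preprocessPrompt("Step 1: greet the user.\n\n\n\nStep 2: ask about their needs."));
```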
Yes, everything is identical.
I'll try this to find out. I suspect the same thing: the data sent to the LLM must be going through some intermediate processing, but I believe that processing happens inside callin.io rather than in Dify.
callin.io is excellent at running tools, and that is probably thanks to some aggressive prompting it adds under the hood; the downside is that the model ends up very direct and short on personality.
I've successfully run quite a few human-like agents using callin.io.
I don't believe the issue lies with callin.io's logic or its "behind the scenes" prompt.
There could be another factor at play.
If you're willing to share your workflow, we might be able to offer more help.
However, if you discover the cause, please share your findings! This is a fascinating situation.