Issue
The AI Agent node's method of storing messages in memory appears to have a flaw concerning tool calls. Based on my testing, tool calls and their corresponding responses are not being saved to memory.
This presents a significant problem. Here are some concrete examples:
- An LLM might claim it previously hallucinated a response instead of making a tool call (even though it actually did make the call; the call just isn't recorded in memory). => Consequently, the LLM may hallucinate a response again rather than using the tool to verify.
- If a stateful tool returns crucial information and the user asks about it in the very next message, the AI Agent cannot answer because the tool's response is no longer accessible. Re-executing the tool might yield a different result or trigger processes again.
I could provide more scenarios, but the point stands: it is essential that tool calls and their responses are saved to memory. A rough sketch of what currently gets lost is below.
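To make this concrete, here is a rough TypeScript sketch of what an OpenAI-style history for a single tool-assisted turn contains versus what ends up in memory. The tool name, arguments, and values are made up for illustration, and this is the generic chat-completions message format, not the node's internal representation:

```typescript
// Illustrative only: OpenAI-style chat messages for one tool-assisted turn.
// "get_order_status" and the order data are hypothetical examples.
interface ToolCall {
  id: string;
  type: "function";
  function: { name: string; arguments: string };
}

type ChatMessage =
  | { role: "user" | "assistant" | "system"; content: string; tool_calls?: ToolCall[] }
  | { role: "tool"; tool_call_id: string; content: string };

const fullTurn: ChatMessage[] = [
  { role: "user", content: "Where is my order #123?" },
  // The assistant decides to call a tool instead of answering directly.
  {
    role: "assistant",
    content: "",
    tool_calls: [
      {
        id: "call_1",
        type: "function",
        function: { name: "get_order_status", arguments: '{"orderId": 123}' },
      },
    ],
  },
  // The tool's result, which the next turn needs in order to answer follow-up questions.
  { role: "tool", tool_call_id: "call_1", content: '{"status": "shipped"}' },
  { role: "assistant", content: "Your order #123 has shipped." },
];

// What is kept in memory today, per my testing: only the user/assistant pair.
const savedToMemory: ChatMessage[] = [fullTurn[0], fullTurn[3]];
```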
Context
While I typically develop my AI Agents using Python, I appreciate the simplicity of the AI Agent node, particularly its memory management. However, this issue renders it impractical for extended conversations where the agent relies on tools.
I would be very pleased if this could be addressed or offered as a configurable option.
I understand this might be challenging due to the varying tool JSON structures required by different LLM providers, but I believe it's a necessary feature.
I welcome any feedback and am willing to assist if needed.
Information on my callin.io setup
- callin.io version: 1.84
- Database (default: SQLite): default (Simple Memory)
- Running callin.io via: webapp
Hello there,
You've raised a valid point. Memory nodes directly connected to an Agent are primarily for storing the chat history (exchanges between the agent and the user).
For more intricate use cases, you can enhance your chat history with additional data. Here are a couple of approaches:
- If you're using an external service such as Redis, you can take the tool outputs and store them manually with a Redis node.
- Alternatively, use the built-in memory manager node to add the same information to your chat history.
You'll likely also need to make sure all of this supplementary data is fed back to the agent with each subsequent message; a rough sketch of the Redis variant follows below.
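For example, here is a minimal TypeScript sketch of the Redis variant, assuming the ioredis client and a made-up key scheme (in a workflow you would wire this up with the Redis node instead, but the data flow is the same): after a tool runs, append the call and its result to a per-session list, and read that list back before the next agent invocation.

```typescript
import Redis from "ioredis";

// Sketch only: appends a tool call and its result to a per-session Redis list
// so they can be re-injected into the agent's context on the next message.
// The key format "chat:<sessionId>:tool_events" is an assumption, not a callin.io convention.
const redis = new Redis(); // defaults to localhost:6379

async function recordToolEvent(
  sessionId: string,
  toolName: string,
  args: unknown,
  result: unknown,
): Promise<void> {
  const key = `chat:${sessionId}:tool_events`;
  await redis.rpush(
    key,
    JSON.stringify({ type: "tool_call", toolName, args, at: Date.now() }),
    JSON.stringify({ type: "tool_result", toolName, result, at: Date.now() }),
  );
}

// On the next user message, read the events back and prepend them to the prompt
// (e.g. as a "Previous tool results" block in the system message).
async function loadToolEvents(sessionId: string): Promise<unknown[]> {
  const raw = await redis.lrange(`chat:${sessionId}:tool_events`, 0, -1);
  return raw.map((entry) => JSON.parse(entry));
}
```

With the memory manager node the idea is the same: insert the serialized tool result as an additional message into the chat history so it is sent back to the agent on the next turn.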
Hi, just a quick follow-up on this topic, if possible.
What would be the pattern for using the memory manager node to store the tool responses? I don't see where in the workflow we could plug it in so that the tool response is retrieved and inserted into memory… Maybe I'm missing something here?
Thanks
Do you have an example of how to set this up? I am using MongoDB for memory and have the same issue. I would like to store the tool output back in the history so it can keep track as it makes multiple calls using callin.io.
I found that issue quite bothersome, so I developed a new AI Agent node to resolve it. This node accurately stores Tool and Tool Result messages within the conversation history.
You can find it here: n8n-nodes-better-ai-agent - npm. It can be installed as a community node.
Additionally, I've included a "Webhook URL" option. You can specify this URL to transmit intermediate steps as they occur, eliminating the need to wait for the complete output.
That's fantastic! And I see downloads are increasing. I wonder if the callin.io team could collaborate to develop a better node together.
Just downloaded it and will give it a shot! Appreciate you doing the groundwork!
Hello. Azure OpenAI is not compatible with this AI Agent.
I'm facing the same problem. I needed to store the tool calls (input-output) for later review within the conversation. Storing them in the chat history isn't ideal or user-friendly, and it puts a significant strain on memory that isn't meant for this purpose. I'm eager to test the ‘better agent’. Thanks.
I'm encountering the same problem and found this thread. I tested your community node and received the following error: Invalid prompt: messages must be an array of CoreMessage or UIMessage · Issue #2 · fjrdomingues/n8n-nodes-better-ai-agent · GitHub. Is there a solution for this?
Thank you!