Hello, everyone!
I’m facing an issue while building an LLM workflow in callin.io.
Context:
I am using:
- Supabase (cloud Postgres) to store chat memory via Postgres Chat Memory.
- Postgres Tool.
- AI Agent (LangChain) for handling user messages.
Problem:
The AI Agent keeps replying with the same message every time, as if it does not see the chat history. It seems like it is not reading data from Postgres Chat Memory, although the data is being recorded correctly in Supabase.
What I tried:
- Reconnected Supabase credentials.
- Changed Session ID keys.
- Verified that data is correctly saved in the Supabase memory table.
- Tried using Postgres Tool and Vector Store in combination, but got the same result.
Questions:
- How can I correctly connect Postgres Chat Memory (Supabase) to AI Agent so it actually reads the chat history and considers previous messages?
- Do I need to set any specific parameters or call memory in a certain way in the AI Agent to load history properly?
I would greatly appreciate any help in investigating this issue, as it’s important for implementing reliable long-term memory in a production assistant.
Thank you!
Hello, I hope you are doing well.
I ran into a similar issue before, and tuning the system prompt alongside the memory and tool configuration fixed it for me.
Give it a shot and let me know.
I’ve been trying to resolve this for several days.
I tried specifying in the system message and in the prompt that the agent should use the tool for memory retrieval, but it only works sporadically.
I also instructed in the prompt that it must always use memory, but I couldn’t get stable results.
Then I tried integrating with Zep, and there it was enough to configure memory in the AI Agent node and it just started working, which seems odd.
If it’s not too much trouble, could you share working examples of workflows where memory works reliably with the AI Agent and Postgres/Supabase Chat Memory?
I want to understand a dependable pattern for wiring memory and prompting together so memory is used consistently in a production environment.
Thank you in advance for your help!
Hi,
Did you select the correct Session Key from the preceding node?
Hi!
Yes, I did set the Session Key correctly:
I use:
={{ $("Telegram Trigger1").item.json.message.chat.id }}
as the sessionKey in the Postgres Chat Memory node, so it consistently matches the user’s Telegram chat ID across sessions.
The Postgres Chat Memory node is linked to the AI Agent via the ai_memory connection, and the OpenAI Chat Model is connected via ai_languageModel.
The workflow sends the agent’s response back to the user in Telegram.
Despite this configuration, the AI Agent frequently disregards the memory and replies with the same message repeatedly, as if it isn't properly processing the memory context, even though the records are saved in Supabase.
Could you review this structure to see if it appears correct, or if I'm overlooking something in linking the memory with the agent to ensure it always consults memory before responding?
Thank you in advance for your assistance!
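For context, here is a minimal sketch of the read path as I understand it, assuming the Postgres Chat Memory node wraps LangChain’s PostgresChatMessageHistory; the connection string is a placeholder (not my real credentials) and the table name is the node’s default:

# Minimal sketch, assuming the node behaves like LangChain's
# PostgresChatMessageHistory. Connection string is a placeholder.
from langchain_community.chat_message_histories import PostgresChatMessageHistory

chat_id = "123456789"  # the Telegram chat.id used as sessionKey (stored as text)

history = PostgresChatMessageHistory(
    session_id=chat_id,  # must match the value the memory node wrote, exactly
    connection_string="postgresql://user:password@db.example.supabase.co:5432/postgres",
    table_name="callin.io_chat_histories",  # the node's default table
)

# If this prints an empty list while rows exist in Supabase, the read-side
# session_id does not match what was stored (e.g., integer vs. text).
print(history.messages)

If the read side resolves a different session_id than the write side (even just by type), the agent sees an empty history every turn, which would explain the repeated replies.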
Hi, consider using my workflow instead.
Delete your existing callin.io_chat_histories table and let the system recreate it.
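If you want to do it by hand, something like this works (a psycopg2 sketch; the connection string is a placeholder, and the quotes matter because the table name contains a dot):

# Sketch: drop the memory table so the workflow recreates it on the next run.
# Back up the table first if you still need the old history.
import psycopg2

conn = psycopg2.connect("postgresql://user:password@db.example.supabase.co:5432/postgres")
with conn, conn.cursor() as cur:
    cur.execute('DROP TABLE IF EXISTS "callin.io_chat_histories";')
conn.close()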
I've integrated memory using Postgres, and it appears to be functioning, but I'm encountering occasional logic failures.
During the booking process, specifically when reaching the confirmation and name-entry stages, the memory seems to "reset," causing the bot to restart the entire sequence.
Could this issue stem from my system prompt or the Postgres configuration?
I would greatly appreciate any suggestions for troubleshooting and resolving this problem!
Could you show me your chat_message table?
I noticed you've included very extensive instructions in the user prompt, which is something I haven't seen before. You should only provide the user message and set up the system prompt like this:
You are a friendly admin of the “CyberFox” computer club. Always return STRICT JSON in the format:
{
  "action": "booking | cancellation | question | other | extension",
  "text": "",
  "data": {
    "date": "",
    "starttime": "",
    "endtime": "",
    "pc_type": "",
    "name": ""
  }
}
Use conversation history to track booking details and continue from the last step. For bookings:
- Calculate date (YYYY-MM-DD) for “today” or “tomorrow” (Europe/Moscow timezone).
- Convert starttime to HH:mm.
- Calculate endtime as starttime + duration (e.g., “3 hours”).
- Set pc_type from user input (e.g., “regular PC”).
- Set name from user input.
Include the data object only if ALL fields (date, starttime, endtime, pc_type, name) are complete. If any field is missing, do not include the data object and ask for the specific missing fields (e.g., “Please provide your name and session duration”). For price/service questions, respond: “Regular PC: 100 RUB/hour, RTX 4080 PC: 150 RUB/hour, VIP booth: 250 RUB/hour. You can book a time slot anytime!” Respond conversationally and end with: “Is there anything else you need?”
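On the callin.io side you can then enforce this contract with a small parsing step after the agent (a sketch, e.g., for a Code node; the field names follow the JSON format above, and the booking_complete flag is my own addition):

# Sketch of a post-agent validation step for the strict-JSON contract above.
import json

REQUIRED = ("date", "starttime", "endtime", "pc_type", "name")

def parse_agent_reply(raw: str) -> dict:
    """Parse the agent's strict-JSON reply and flag whether the booking is complete."""
    reply = json.loads(raw)
    data = reply.get("data") or {}
    # True only when every required booking field is present and non-empty.
    reply["booking_complete"] = all(data.get(field) for field in REQUIRED)
    return reply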
Check the database to ensure it's saving memory correctly and try reducing the Context Window Length.
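For the database check, a query like this (a sketch; the connection string is a placeholder) shows whether messages are actually accumulating under one session_id per chat:

# Sketch: count stored messages per session to confirm history is accumulating.
import psycopg2

conn = psycopg2.connect("postgresql://user:password@db.example.supabase.co:5432/postgres")
with conn.cursor() as cur:
    cur.execute(
        'SELECT session_id, COUNT(*) FROM "callin.io_chat_histories" GROUP BY session_id;'
    )
    for session_id, message_count in cur.fetchall():
        print(session_id, message_count)
conn.close()

Many sessions with one or two rows each would point to an unstable session key rather than a memory-read problem.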
The database table includes only the fields id, session_id, and message; this is the default callin.io table. In the session_id field, I am passing the Telegram chat_id. Below is an example of the content found in the message column:
{
  "type": "human",
  "content": "You are the administrator of the CyberFox club. Respond only in JSON:\n{\n  \"action\": \"booking|question|other\",\n  \"text\": \"reply to the user\",\n  \"data\": {\n    \"date\": \"YYYY-MM-DD\",\n    \"start_time\": \"HH:mm\",\n    \"end_time\": \"HH:mm\",\n    \"pc_type\": \"PC type\",\n    \"name\": \"name\"\n  }\n}\n\nIMPORTANT:\n- Include the \"data\" object ONLY if all five fields are provided\n- If name, time, or date is missing — request it in the \"text\" field\n- \"tomorrow\" = 2025-07-16, \"today\" = 2025-07-15\n\nPrices: Standard PC — ₽100/h, RTX 4080 — ₽150/h, VIP — ₽250/h",
  "additional_kwargs": {},
  "response_metadata": {}
}
Telegram Bot Memory Issue
Unfortunately, the memory issue has not been resolved. After multiple attempts and tests, it has become apparent that the issue is most likely specific to the Telegram + PostgreSQL integration.
Problematic scenario:
- Telegram + PostgreSQL = memory functionality fails
- The bot fails to retain previous responses
- SessionKey is set up correctly, but data is not persisted
I've been consistently using Telegram and PostgreSQL, and they have performed excellently for my needs.