
Supabase vector store error case

10 Posts
3 Users
0 Reactions
5 Views
Oskar_Niedojadlo
(@oskar_niedojadlo)
Posts: 6
Active Member
Topic starter
 

Describe the problem/error/question

Hello. I'm quite frustrated: I've been trying to resolve this persistent error for the past two days, which has taken longer than building a fully operational appointment-orchestration workflow with Supabase + callin.io. I'm attempting to create an FAQ agent that works with documents in Supabase via the Vector Store tool (attached as a tool to the AI Agent). Every single Supabase node, including custom HTTP request calls, worked perfectly in my appointment workflow, so I decided to build a fan bot next.

I can insert documents into my Supabase documents table without any issues; everything works fine there. I've even run SQL commands that confirm the documents are in the database and can be found. But when I ask a simple question like "where is the bar", the agent spends a very long time running the vector tool (text-embedding-3-small, 1536 dimensions, batch size 3, strip new lines on, timeout 30) and retrieves nothing at all; it ends with a "Request timed out" error.

I initially thought I might have made a mistake by adding titles and categories in the metadata, so I dropped every function and table in my Supabase project that had to do with documents and set it up again exactly as the documentation suggests:

-- Enable the pgvector extension to work with embedding vectors
create extension vector;

-- Create a table to store your documents
create table documents (
  id bigserial primary key,
  content text, -- corresponds to Document.pageContent
  metadata jsonb, -- corresponds to Document.metadata
  embedding vector(1536) -- 1536 works for OpenAI embeddings, change if needed
);

-- Create a function to search for documents
create function match_documents (
  query_embedding vector(1536),
  match_count int default null,
  filter jsonb default '{}'
) returns table (
  id bigint,
  content text,
  metadata jsonb,
  similarity float
)
language plpgsql
as $$
#variable_conflict use_column
begin
  return query
  select
    id,
    content,
    metadata,
    1 - (documents.embedding <=> query_embedding) as similarity
  from documents
  where metadata @> filter
  order by documents.embedding <=> query_embedding
  limit match_count;
end;
$$;

LangChain | Supabase Docs, Supabase Vector Store node documentation | callin.io Docs
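
To rule callin.io out entirely, a direct call to the function in the Supabase SQL editor should return rows if the table and function are fine. A minimal sketch, reusing a stored embedding as the query vector so no OpenAI call is involved at all (assumes at least one row exists in documents):

-- Sanity check: call match_documents directly with an embedding that is
-- already stored, so the OpenAI API is not involved.
select id, similarity
from match_documents(
  (select embedding from documents limit 1),  -- reuse a stored vector as the query
  3,                                          -- match_count
  '{}'::jsonb                                 -- no metadata filter
);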

I'M GETTING THE SAME ERROR EVEN WITH ONLY ONE SINGLE DOCUMENT IN THE DATABASE???

I suspected my database might be corrupt, so I created a completely new one just to test it again. I followed the documentation precisely. Guess what? The same error.

:slight_smile:
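
If anyone else hits the same wall: one thing worth confirming (a hypothetical check on my part, not something the docs call for) is that the stored embeddings actually have the 1536 dimensions the function signature expects. pgvector's vector_dims makes that a one-liner:

-- Confirm every stored embedding has the dimensionality match_documents expects.
select vector_dims(embedding) as dims, count(*)
from documents
group by 1;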

… I've watched numerous tutorials on setting up vector stores on YouTube and elsewhere. It should be incredibly simple to set up, and I'm certainly not new to this. Since I'm completely at a loss for how to fix this problem, please help me. Perhaps a callin.io manager could review my account and identify the issue, because I've already set up complex workflows. This ridiculous situation is making me want to give up. It's also not an issue with my agent prompt or the tool description, not even close.

What is the error message (if any)?

Request timed out.

Error details

Other info

n8n version

1.100.1 (Cloud)

Time

7.7.2025, 15:16:11

Error cause

{ "name": "TimeoutError", "attemptNumber": 7, "retriesLeft": 0 }

Please share your workflow

(Select the nodes on your canvas and use the keyboard shortcuts CMD+C/CTRL+C and CMD+V/CTRL+V to copy and paste the workflow.)

{
  "nodes": [
    {
      "parameters": {
        "options": {}
      },
      "type": "@callin.io/callin.io-nodes-langchain.chatTrigger",
      "typeVersion": 1.1,
      "position": [-360, 100],
      "id": "",
      "name": "When chat message received",
      "webhookId": ""
    },
    {
      "parameters": {
        "options": {
          "systemMessage": ""
        }
      },
      "type": "@callin.io/callin.io-nodes-langchain.agent",
      "typeVersion": 2,
      "position": [-100, 100],
      "id": "",
      "name": "AI Agent"
    },
    {
      "parameters": {
        "model": {
          "__rl": true,
          "mode": "list",
          "value": "gpt-4.1-mini"
        },
        "options": {}
      },
      "type": "@callin.io/callin.io-nodes-langchain.lmChatOpenAi",
      "typeVersion": 1.2,
      "position": [-200, 320],
      "id": "",
      "name": "OpenAI Chat Model",
      "credentials": {}
    },
    {
      "parameters": {
        "mode": "retrieve-as-tool",
        "toolDescription": "Use this tool to get info from our database",
        "tableName": {
          "__rl": true,
          "value": "documents",
          "mode": "list",
          "cachedResultName": "documents"
        },
        "topK": 3,
        "options": {
          "queryName": "match_documents"
        }
      },
      "type": "@callin.io/callin.io-nodes-langchain.vectorStoreSupabase",
      "typeVersion": 1.3,
      "position": [260, 320],
      "id": "",
      "name": "Supabase Vector Store",
      "credentials": {}
    },
    {
      "parameters": {
        "options": {
          "dimensions": 1536,
          "batchSize": 3,
          "stripNewLines": true,
          "timeout": 30
        }
      },
      "type": "@callin.io/callin.io-nodes-langchain.embeddingsOpenAi",
      "typeVersion": 1.2,
      "position": [160, 480],
      "id": "",
      "name": "Embeddings OpenAI",
      "credentials": {}
    },
    {
      "parameters": {
        "tableId": "feedback",
        "fieldsUi": {
          "fieldValues": [
            {
              "fieldId": "documentid",
              "fieldValue": "={{$fromAI('document_id')}}"
            },
            {
              "fieldId": "feedbacktype",
              "fieldValue": "={{$fromAI('feedback_type')}}"
            },
            {
              "fieldId": "usercomment",
              "fieldValue": "={{$fromAI('user_comment')}}"
            },
            {
              "fieldId": "question",
              "fieldValue": "={{$fromAI('originalquestion')}}"
            }
          ]
        }
      },
      "type": "callin.io-nodes-base.supabaseTool",
      "typeVersion": 1,
      "position": [120, 320],
      "id": "",
      "name": "log_feedback",
      "credentials": {}
    },
    {
      "parameters": {},
      "type": "@callin.io/callin.io-nodes-langchain.memoryPostgresChat",
      "typeVersion": 1.3,
      "position": [-20, 320],
      "id": "",
      "name": "Postgres Chat Memory",
      "credentials": {}
    }
  ],
  "connections": {
    "When chat message received": {
      "main": [
        [
          {
            "node": "AI Agent",
            "type": "main",
            "index": 0
          }
        ]
      ]
    },
    "OpenAI Chat Model": {
      "ai_languageModel": [
        [
          {
            "node": "AI Agent",
            "type": "ai_languageModel",
            "index": 0
          }
        ]
      ]
    },
    "Supabase Vector Store": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Embeddings OpenAI": {
      "ai_embedding": [
        [
          {
            "node": "Supabase Vector Store",
            "type": "ai_embedding",
            "index": 0
          }
        ]
      ]
    },
    "log_feedback": {
      "ai_tool": [
        [
          {
            "node": "AI Agent",
            "type": "ai_tool",
            "index": 0
          }
        ]
      ]
    },
    "Postgres Chat Memory": {
      "ai_memory": [
        [
          {
            "node": "AI Agent",
            "type": "ai_memory",
            "index": 0
          }
        ]
      ]
    }
  },
  "pinData": {},
  "meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": ""
  }
}

Information on your n8n setup

  • n8n version: 1.100.1 (Cloud)
  • Database (default: SQLite): Supabase SQL
  • n8n EXECUTIONS_PROCESS setting (default: own, main): default
  • Running n8n via (Docker, npm, n8n cloud, desktop app): cloud
  • Operating system: macOS and Windows (both don't work!)
 
Posted : 07/07/2025 1:27 pm
Wouter_Nigrini
(@wouter_nigrini)
Posts: 31
Eminent Member
 

Could you please share your workflow within a code block? We're unable to view it in its current format.

 
Posted : 07/07/2025 4:07 pm
Oskar_Niedojadlo
(@oskar_niedojadlo)
Posts: 6
Active Member
Topic starter
 

What are you unable to see? It's a basic AI agent with PostgreSQL memory, a Supabase vector store tool utilizing OpenAI embeddings, a log_feedback Supabase tool, and an OpenAI chat model. You can even clearly view the settings.

 
Posted : 07/07/2025 4:26 pm
Wouter_Nigrini
(@wouter_nigrini)
Posts: 31
Eminent Member
 

When you paste the workflow code without a code block, the forum alters the formatting, which makes it impossible to copy the JSON back into callin.io to diagnose the issue. Try it yourself and you'll see that nothing happens when you paste it.

 
Posted : 07/07/2025 4:28 pm
Oskar_Niedojadlo
(@oskar_niedojadlo)
Posts: 6
Active Member
Topic starter
 

Here are my vector store settings.

 
Posted : 07/07/2025 4:28 pm
Wouter_Nigrini
(@wouter_nigrini)
Posts: 31
Eminent Member
 

I'll need the complete workflow to be able to assist you.

Please share the entire workflow.

 
Posted : 07/07/2025 4:29 pm
Oskar_Niedojadlo
(@oskar_niedojadlo)
Posts: 6
Active Member
Topic starter
 

This is the complete workflow. I just need this agent to utilize its tool as intended.

 
Posted : 07/07/2025 4:38 pm
Oskar_Niedojadlo
(@oskar_niedojadlo)
Posts: 6
Active Member
Topic starter
 

I understand you can't paste it directly into your workflow since I removed the credentials and IDs. You can construct a basic AI agent utilizing a vector store tool and share the settings you employed, as there's nothing more to it. As I mentioned, it's a straightforward AI agent that uses PostgreSQL memory, the OpenAI 4.1 mini chat model, and includes a vector store tool. The specific settings I configured for the tool are visible within the code.

 
Posted : 07/07/2025 4:40 pm
Oskar_Niedojadlo
(@oskar_niedojadlo)
Posts: 6
Active Member
Topic starter
 

I've resolved the issue. The problem was simply that I had manually configured settings in the embedding node. It's best to keep the embedding node clean and select only the model you intend to use.
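
Concretely, in the workflow JSON above, the only change was in the Embeddings OpenAI node's options; roughly this fragment (a sketch of the relevant part, not the full node):

Before (times out):
"options": { "dimensions": 1536, "batchSize": 3, "stripNewLines": true, "timeout": 30 }

After (works):
"options": {}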

 
Posted : 07/07/2025 5:29 pm
system
(@system)
Posts: 332
Reputable Member
 

This thread was automatically closed 7 days following the last response. New replies are no longer permitted.

 
Posted : 14/07/2025 5:30 pm