Hello community,
I'm working on an AI agent that uses a fairly large Supabase PGVector knowledge base. Most of the time the agent simply doesn't call the tool. I've instructed it in the prompt to use the tool named "CompanyKnowledgeBase" (the Supabase Vector Store node), but I'm not getting any results.
Could this be due to my use of an Output parser? I'm encountering other issues with the output parser as well.
System prompt:
You are an AI Coach starting test XYZ.
Explain what this test is about by searching the CompanyKnowledgeBase.
Then present EXACTLY the following first next statement and ask the user to give a rating from 1–5: ${$input.first().json.Item}
Follow the provided JSON schema for your response: {"type":"object","properties":{"output":{"type":"object","properties":{"canProceedWithNextQuestion":{"type":"boolean","description":"true in this case"},"previousScore":{"type":"number","description":"null in this case"},"response":{"type":"string","description":"Your response"}},
"required":["canProceedWithNextQuestion","previousScore","response"],"additionalProperties":false}},"required":["output"],"additionalProperties":false}
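For reference, a reply that follows this schema at the very start of the quiz would look roughly like the following (values are illustrative; note that a strict validator would reject null for a field typed "number", so that may be one source of the parser issues):

```json
{
  "output": {
    "canProceedWithNextQuestion": true,
    "previousScore": null,
    "response": "Test XYZ is about ... (explanation retrieved from CompanyKnowledgeBase). First statement: ... Please rate it from 1 to 5."
  }
}
```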
This is what the output parser schema looks like:
{
"canProceedWithNextQuestion": "Whether we can proceed with the next question, if the last one has been answered with a score (true or false)",
"previousScore": "The user's numeric answer (1–5) to the question, or null if invalid",
"response": "Your statement and next question or user query"
}
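As written, this is a field-to-description mapping rather than a full JSON Schema. If the Structured Output Parser is instead given a standard JSON Schema definition, a minimal sketch covering the same three fields could look like this (the ["number", "null"] type for previousScore is an assumption so that null answers are allowed):

```json
{
  "type": "object",
  "properties": {
    "canProceedWithNextQuestion": {
      "type": "boolean",
      "description": "Whether we can proceed with the next question (true or false)"
    },
    "previousScore": {
      "type": ["number", "null"],
      "description": "The user's numeric answer (1-5) to the question, or null if invalid"
    },
    "response": {
      "type": "string",
      "description": "Your statement and next question, or a reply to the user's query"
    }
  },
  "required": ["canProceedWithNextQuestion", "previousScore", "response"],
  "additionalProperties": false
}
```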
Can anyone offer some assistance?
Could you please update your question with your workflow so we can better understand the issue?
It might be a simple name mismatch, but if you're looking to enhance how you use your tools, consider using the MCP client/server for increased flexibility and more detailed tool descriptions.
To ensure your agent retrieves content from the vector store, utilize the Question and Answer node as demonstrated below. If your chatbot or agent performs tool calls, encapsulate this example within its own workflow and then invoke that workflow as a tool from your main agent.
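Conceptually, the wrapped workflow behaves like a question-answering tool: the main agent passes a search query in and gets back text retrieved from the vector store. A rough sketch of that contract, with hypothetical field names chosen just to show the shape (not the exact node output):

```json
{
  "toolInput":  { "query": "What is test XYZ about?" },
  "toolOutput": { "answer": "Test XYZ measures ... (text retrieved from CompanyKnowledgeBase)" }
}
```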
I'm essentially trying to set up a quiz. Here are the scenarios I'm dealing with (illustrative outputs for two of these stages are sketched after the list):
- Quiz Start: The AI Agent should deliver the initial greeting and the first question.
- Standard Flow: The AI Agent should respond to inquiries, record answers, and then present the subsequent question.
- Mid-Quiz Scoring: After a set number of questions (X), the AI Agent should provide a score.
- Final Scoring: Once all questions are completed, the AI Agent should present the final score and discuss the results.
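To make this concrete, the structured output (per the schema above) would look something like this at two of these stages; all values are purely illustrative.

Standard flow (answer recorded, next question asked):

```json
{
  "output": {
    "canProceedWithNextQuestion": true,
    "previousScore": 4,
    "response": "Thanks for your rating. Next statement: ... Please rate it from 1 to 5."
  }
}
```

Mid-quiz scoring (after X questions, pause to report a score):

```json
{
  "output": {
    "canProceedWithNextQuestion": false,
    "previousScore": 5,
    "response": "Here is your intermediate score and a short interpretation. Shall we continue with the next question?"
  }
}
```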
Reason for Using Two Agents:
This approach is also driven by another challenge: the need for a "score interpretation" output. When I attempted to use a single agent for both functions, the output parser would sometimes generate interpretations when it wasn't supposed to. Additionally, the output parser wasn't functioning correctly, occasionally returning JSON data from the other agent.