
AI Agent skips tool and returns made-up response on third run

Xenia_Zhukova
(@xenia_zhukova)
Posts: 1
New Member
Topic starter
 

Hi everyone! :wave:

I'm running into an unusual problem with my AI Agent configuration. I need to invoke the "image agent" tool several times in a row within a workflow. The first two invocations work perfectly: the tool executes and returns the correct processed output.

On the third attempt, however, the workflow does not call the tool at all. Instead, the agent fabricates a JSON response that the tool never produced. The response follows my defined format but contains made-up data, so the image processing step is silently skipped.

My Setup

  • Image Workflow → triggers the run
  • Image Agent Coordinator → formats the prompt and forwards it to the Image Agent
  • Image Agent → selects and executes a specific image-processing tool
    (e.g., removeBackground, createPoster, scaleImage)

The issue occurs in the Image Agent Coordinator, which stops calling the tool from the third execution onwards.
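
For clarity, the contract between these nodes, with field names taken from the rules quoted below and the response shape from the example in section 6, amounts to something like this TypeScript sketch (the interface names are my own):

// Hypothetical interface names; field names are from the coordinator rules.
interface CoordinatorInput {
  url: string;       // image URL to process
  prompt: string;    // textual description of the user's request
  sessionId: string; // passed through for session tracking
}

interface ImageAgentResponse {
  status: string; // e.g. "success"
  action: string; // e.g. "removeBackground", "createPoster", "scaleImage"
  link: string;   // URL of the processed image
  text: string;   // human-readable result message
}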

AI Agent Rules (Coordinator)

1. Role
You are the Image Agent Workflow Coordinator.
Your primary responsibility is to:
• Receive a prompt and url
• Pass this input directly to the Image Agent
• Wait for its full JSON response
• Return that exact response to the caller (e.g. webhook) without alteration

You do not decide which tool to use. You are not responsible for interpreting the prompt or executing image processing tools. That is the role of the Image Agent.

2. Input
You receive 3 fields:
• url: the image URL to process
• prompt: a textual description of the user’s request
• sessionId

3. Flow of Execution
1. Pass url and prompt to the Image Agent.
2. Wait for the Image Agent’s full response.
3. Return exactly the same JSON response without changing, omitting, or reordering any fields.

4. JSON Format Enforcement
You must return:
• A valid JSON object
• In the exact structure provided by the Image Agent
• Without adding or removing any fields

Do not:
• Reformat field names
• Change null to empty strings or vice versa
• Reorder fields
• Add metadata
• Strip any fields

5. Error Propagation
If the Image Agent returns an error, you must still return the JSON as-is.

6. Example
Prompt: “Remove the background from this image”
URL: "https://cdn.example.com/image.jpg"

✅ You send both to the Image Agent.
✅ Image Agent returns:
{
  "status": "success",
  "action": "removeBackground",
  "link": "https://cdn.example.com/image-clean.png",
  "text": "Background removed successfully."
}
✅ You return exactly that.

✅ Final Reminder:
Always call the Image Agent tool.
Do not retry or loop.
No caching. No assumptions. No interpretation.
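
If the coordinator were deterministic code rather than an LLM, rules 3 and 4 would reduce to the strict pass-through sketched below; callImageAgent is a hypothetical stand-in for the actual Image Agent tool call, not a real callin.io function:

// Minimal sketch of the coordinator's intended behavior.
// callImageAgent is a hypothetical stand-in for the Image Agent tool.
type ImageAgentJson = Record<string, unknown>; // opaque: returned untouched

declare function callImageAgent(
  input: { url: string; prompt: string; sessionId: string }
): Promise<ImageAgentJson>;

async function coordinate(
  input: { url: string; prompt: string; sessionId: string }
): Promise<ImageAgentJson> {
  // Always call the tool, then return its JSON verbatim:
  // no reformatting, no added or stripped fields, no interpretation.
  return await callImageAgent(input);
}

The failure described above is exactly the LLM deviating from this contract: on the third run it skips the tool call and writes the return value itself.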

What I Observed

I am utilizing the gpt-4o-mini model.
Each request includes a distinct image URL and prompt.
The failure is consistent, occurring only on the third call.
The third output resembles the following:
{
  "status": "success",
  "action": "createPoster",
  "link": "https://",
  "text": "Poster created successfully."
}
...however, the tool was never invoked; this is an AI-generated fabrication.
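
One possible downstream guard, purely a hedged suggestion and not from the thread, is to reject responses whose link is not a usable URL; a fabricated answer like the one above fails immediately:

// Hedged sketch: flag responses that look fabricated rather than
// tool-produced. Field names match the response format above; the
// function name is my own.
function looksFabricated(response: { status: string; link: string }): boolean {
  if (response.status !== "success") return false; // errors pass through as-is
  try {
    const url = new URL(response.link);
    // A real processed image should have a host, not a bare scheme.
    return url.hostname.length === 0;
  } catch {
    return true; // "https://" alone is not even a parseable URL
  }
}

// The fabricated third-run output would be caught:
// looksFabricated({ status: "success", link: "https://" }) === true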

:question: Questions

• What could be the reason for the agent skipping the tool call specifically on the third run?
• Is this a model behavior quirk, an internal state problem, or an issue with how I've structured the rules?
• Are there any known issues with GPT-4o-mini skipping tool calls or producing hallucinated outputs?
• I would appreciate any insights from others who may have encountered this or have a potential workaround.

:pray:

Information on your callin.io setup

  • callin.io version: 1.85.4
  • Database (default: SQLite):
  • callin.io EXECUTIONS_PROCESS setting (default: own, main):
  • Running callin.io via: web app
  • Operating system: macOS Sequoia 15.0
 
Posted : 08/04/2025 12:36 pm
jcuypers
(@jcuypers)
Posts: 29
Eminent Member
 

Hi,

Some things you could try:
- Enhance your tool description ("Calling this tool to perform any action on an image" isn't very descriptive :slight_smile:)
- Experiment with gpt-4o instead of gpt-4o-mini
- Reduce the sampling temperature from 0.7 (this should curb the "creativity" and boost consistency)
- On the failure consistently hitting the 3rd call: you might be running a test cycle with fixed items/operations. If so, have you tried varying the test data to see what happens? (Or does it ALWAYS fail on the 3rd attempt, regardless of what you do?)
- Additionally, I'm not clear on why memory is needed here, since this appears to be a sub-workflow solely for image tasks. Re-injected memory could cause problems, so you might want to test without it.
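
To make the second and third suggestions concrete: with the OpenAI Node SDK you can lower the temperature and, on models that support it, force a tool call via tool_choice, which stops the model from improvising JSON instead of calling the tool. This is a hedged sketch, not a drop-in callin.io configuration; the image_agent tool definition is illustrative:

import OpenAI from "openai";

const client = new OpenAI();

// Hedged sketch: lower temperature for consistency and force tool use.
const completion = await client.chat.completions.create({
  model: "gpt-4o-mini",
  temperature: 0.2,        // down from 0.7 to curb "creativity"
  tool_choice: "required", // the model must call a tool, not answer directly
  tools: [
    {
      type: "function",
      function: {
        name: "image_agent", // illustrative name, not from the thread
        description:
          "Selects and runs one image-processing operation " +
          "(removeBackground, createPoster, scaleImage) on the image " +
          "at the given URL, and returns the tool's JSON result.",
        parameters: {
          type: "object",
          properties: {
            url: { type: "string", description: "Image URL to process" },
            prompt: { type: "string", description: "User's request" },
          },
          required: ["url", "prompt"],
        },
      },
    },
  ],
  messages: [
    { role: "user", content: "Remove the background from this image" },
  ],
});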

regards,
J.

 
Posted : 08/04/2025 12:51 pm
Ruan17
(@ruan17)
Posts: 6
Active Member
 

I ran the tests, and it didn't work as expected. I'm currently using gpt-4o-mini, not gpt-4o, due to cost considerations.

I'm also employing this structure and including it in the prompt:


{ACTION: activa_atendimento_humano}

However, this is also not yielding the desired results. I've configured the tool with its name and description, but it remains non-functional.

 
Posted : 08/04/2025 4:57 pm
jcuypers
(@jcuypers)
Posts: 29
Eminent Member
 

Hi, I'm not entirely sure what you're referring to.

Are you discussing the same workflow, or perhaps something different?

Regards,
J.

 
Posted : 08/04/2025 5:22 pm
Ruan17
(@ruan17)
Posts: 6
Active Member
 

I'm also encountering an issue with activating the AI tool.

 
Posted : 08/04/2025 5:26 pm
jcuypers
(@jcuypers)
Posts: 29
Eminent Member
 

Hi, understood. Please create a new topic with a detailed explanation if this isn't the correct one.

Regards,
J.

 
Posted : 08/04/2025 5:31 pm
system
(@system)
Posts: 332
Reputable Member
 

This thread was automatically closed 90 days following the last response. New replies are no longer permitted.

 
Posted : 07/07/2025 5:31 pm