Describe the problem/error/question
When adding a Google Drive "download file" operation as a Tool for an AI Agent, no binary data is returned. The output is only a JSON object containing the fileId used for the download, with no binary payload.
I've attempted to force the download results into a field, but this did not resolve the issue.
This problem has been mentioned in several discussions without a definitive answer.
What is the error message (if any)?
No error messages were encountered.
Please share your workflow
Due to the private nature of the self-hosted workflow, I am unable to share it.
Share the output returned by the last node
Information on your n8n setup
- n8n version: 1.89.2
- Database (default: SQLite):
- n8n EXECUTIONS_PROCESS setting (default: own, main):
- Running n8n via (Docker, npm, n8n cloud, desktop app): portainer
- Operating system: macOS
Hello, welcome
Could you share your workflow so we can see how we can help you?
To provide additional context for debugging, I configured a sub-workflow. This sub-workflow executes the file download from Google Drive and is intended to return the file data.
Within the sub-workflow execution, I can confirm that binary data is successfully downloaded and returned from Google Drive.
However, upon returning to the parent workflow and interacting with the AI agent, the binary file is not received. Consequently, it is not being passed to other tools or included as an execution result.
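One workaround I considered (a sketch only, not confirmed n8n behavior): since the tool boundary appears to pass only the `json` part of each item back to the agent, a Code node at the end of the sub-workflow could copy the binary payload into a JSON field as base64 so it survives the round trip. The snippet below simulates an n8n item shape (`{ json, binary }`); the field names `fileName`, `mimeType`, and `dataBase64` on the copied object are my own, and the sample item values are hypothetical.

```javascript
// Hypothetical sketch: serialize binary attachments into JSON fields so they
// survive a JSON-only boundary. Mirrors n8n's item shape ({ json, binary }),
// where binary data is stored base64-encoded in the `data` property.
function binaryToJson(item) {
  const out = { json: { ...item.json } };
  for (const [key, bin] of Object.entries(item.binary ?? {})) {
    out.json[key] = {
      fileName: bin.fileName,
      mimeType: bin.mimeType,
      dataBase64: bin.data, // already base64 in the n8n item shape
    };
  }
  return out;
}

// Simulated item, roughly as a Google Drive download node might produce it
const item = {
  json: { fileId: 'example-file-id' }, // placeholder id
  binary: {
    data: {
      fileName: 'report.pdf',
      mimeType: 'application/pdf',
      data: Buffer.from('%PDF-1.4 sample bytes').toString('base64'),
    },
  },
};

const result = binaryToJson(item);
console.log(result.json.data.mimeType); // application/pdf
```

The agent then receives the file as an ordinary JSON field, at the cost of base64 overhead and token usage if the model ever sees the raw field.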
Still not resolved
Same problem here. I've been attempting to set up a multi-step sub-workflow as a tool for my AI agent to analyze WhatsApp chat export zip files, including all associated media. However, only the JSON fields are being passed back to the AI agent, not the binary data. This happens even though I can see the binary data being output from the final node in the execution chain; it seems to get lost on the way back to the main workflow.
Edit: I've managed to find a workaround by essentially re-implementing tool calling using structured outputs. The workflow now checks if the AI agent is requesting that specific tool via the structured output fields. If it is, it runs the sub-workflow and then passes all the outputs, including binary data, back to the AI agent node. This approach works perfectly, suggesting there might be an issue with the current tool implementation.
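The dispatch logic of that workaround can be sketched in plain JavaScript. This is an illustration of the pattern, not the actual workflow: the field names `tool`, `args`, and `answer` on the agent's structured output are assumptions, and `downloadFile` is a stand-in for the sub-workflow. The point is that an ordinary function call preserves the whole result, binary included, instead of routing it through the JSON-only tool interface.

```javascript
// Hypothetical sketch of re-implemented tool calling via structured outputs:
// the agent emits a structured field naming the tool it wants; workflow logic
// runs the matching sub-workflow and returns its full output (json + binary).
function dispatch(agentOutput, tools) {
  if (!agentOutput.tool) {
    // No tool requested: the agent produced a final answer.
    return { done: true, answer: agentOutput.answer };
  }
  const tool = tools[agentOutput.tool];
  if (!tool) throw new Error(`Unknown tool: ${agentOutput.tool}`);
  // The sub-workflow result is returned intact, binary data included.
  return { done: false, toolResult: tool(agentOutput.args) };
}

// Simulated sub-workflow tool returning binary data alongside JSON
const tools = {
  downloadFile: ({ fileId }) => ({
    json: { fileId },
    binary: { data: Buffer.from('file-bytes').toString('base64') },
  }),
};

const result = dispatch(
  { tool: 'downloadFile', args: { fileId: 'abc123' } }, // sample structured output
  tools,
);
console.log(result.toolResult.json.fileId); // abc123
```

In the real workflow the `dispatch` step would be an IF/Switch node checking the structured output fields, and the tool call an Execute Workflow node, but the data flow is the same.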