Hello callin.io Community,
I’m currently implementing a high-level smart automation solution called NextGen Varejo: Control and Intelligence, designed for Brazilian retailers like supermarkets and pharmacies. The platform integrates directly with the ERP system and is powered by multiple specialized AI Agents (Inventory, Finance, Marketing, Customer, Social Media), all orchestrated by a central module called the MCP – Model Context Protocol.
At the heart of the AI engine, we use Vertex AI (Gemini) from Google Cloud for natural language understanding and generation, embedded directly into “Entry Agents” – smart gateways that route contextual requests to the appropriate AI agents. All of this is visually orchestrated in a self-hosted callin.io instance (v1.101.2).
The Problem
When using the Google Gemini and AI Agent nodes from the @n8n/n8n-nodes-langchain package, we encounter this error:
"Cannot read properties of undefined (reading 'Symbol(Symbol.asyncIterator)')"
This happens when the Gemini model attempts to return a streaming response using the streamGenerateContent method, which appears to be improperly handled by the current version of the node.
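For context, that error text is what V8 throws when code tries to open an async iterator on an undefined value. A minimal sketch of the failure mode (this is an illustration, not the node's actual source):

```javascript
// Hypothetical reproduction: if the node expects a streaming response
// object but gets a payload without a `stream` property, reading
// Symbol.asyncIterator on `undefined` throws the TypeError from the post.
function getStreamIterator(response) {
  // `response.stream` is undefined for a non-streaming payload
  return response.stream[Symbol.asyncIterator]();
}

try {
  // Simulate a plain generateContent-style reply (no `stream` field)
  getStreamIterator({ candidates: [] });
} catch (err) {
  console.error(err.message);
  // TypeError: Cannot read properties of undefined
  // (reading 'Symbol(Symbol.asyncIterator)')
}
```

In other words, the node appears to treat the streamGenerateContent reply as if it contained an iterable stream object when it does not, so the value it iterates over is undefined.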
What We’ve Verified
- Authentication with Vertex AI via Service Account is fully working;
- The Gemini model is active in the us-central1 region;
- The Google API is being called successfully (this is not a Google Cloud error);
- The correct endpoint (generateContent or streamGenerateContent) is being used;
- The error occurs only when the callin.io node tries to process the streaming response.
Workarounds in Progress
- We’re building a dedicated sub-workflow using the HTTP Request node, manually calling the Gemini REST API as a fallback;
- This sub-workflow will be connected via Execute Workflow to the various AI Agents until the native node handles streaming properly.
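For anyone building a similar fallback, here is a rough sketch of the request the HTTP Request node needs to send. The project ID and model name are placeholders, and the OAuth2 access token (obtained from the Service Account) is omitted; the Vertex AI endpoint shape is `projects/{project}/locations/{location}/publishers/google/models/{model}:generateContent`:

```javascript
// Builds the URL and JSON body for a non-streaming Vertex AI call.
// "my-project" and "gemini-1.5-pro" are placeholders; substitute your
// own project ID and the model you have enabled in us-central1.
function buildGeminiRequest({ projectId, region, model, prompt }) {
  const url =
    `https://${region}-aiplatform.googleapis.com/v1/projects/${projectId}` +
    `/locations/${region}/publishers/google/models/${model}:generateContent`;
  const body = {
    contents: [{ role: "user", parts: [{ text: prompt }] }],
  };
  return { url, body };
}

const req = buildGeminiRequest({
  projectId: "my-project",    // placeholder
  region: "us-central1",
  model: "gemini-1.5-pro",    // placeholder
  prompt: "Suggest a promotion for slow-moving stock",
});
```

In the HTTP Request node: method POST, URL set to the string above, the JSON body as shown, and an `Authorization: Bearer <token>` header from the Service Account credential. Because this calls generateContent rather than streamGenerateContent, the reply arrives as a single JSON object and the streaming bug is avoided entirely.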
Why This Matters
This is part of a live retail automation system aimed at optimizing stock, marketing, communication and pricing decisions. Vertex AI is essential to generate real-time insights, customer content and campaign ideas. For this to work in production, reliable streaming response support is crucial.
If anyone in the community has experienced this issue or found a more efficient workaround (without losing the benefits of streaming), your help would be greatly appreciated!
Thanks in advance to the amazing callin.io community – your platform is incredibly powerful when paired with Google Cloud.
Best regards,
Elton Nunes dos Santos
NextGen Varejo | Belo Horizonte, Brazil
LinkedIn
I'm encountering the same problem. Has there been any progress on this?
I've encountered a similar problem. I'm guessing you have a Structured Output Parser node linked to your Basic LLM Chain node, with Auto-Fix activated.
The solution was quite straightforward for me: remove the Structured Output Parser node, create a new one, and connect it to the Basic LLM Chain using the same structured JSON. This resolved the issue for me. ¯\_(ツ)_/¯