Hello, I encountered a similar problem and found a solution that worked for me. Before setting these variables, my AI Agent node took around 22 seconds to execute; now it completes in just 2 to 3 seconds.
Here's the relevant link: Significant performance delay with AI Agent / Ollama Node on Windows compared to direct API calls or Open WebUI · Issue #15263 · n8n-io/n8n · GitHub
Here are the variables I added:
N8N_RUNNERS_ENABLED=true
N8N_DIAGNOSTICS_ENABLED=false
N8N_VERSION_NOTIFICATIONS_ENABLED=false
EXTERNAL_FRONTEND_HOOKS_URLS=
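For a self-hosted instance, one way to apply these is to export them in the shell before starting the service. This is a minimal sketch assuming a non-Docker install started with the `n8n` CLI; with Docker you would instead pass them via `-e` flags or the `environment:` section of a compose file:

```shell
# Sketch: set the variables in the environment the n8n process inherits.
export N8N_RUNNERS_ENABLED=true                  # run task runners for faster node execution
export N8N_DIAGNOSTICS_ENABLED=false             # disable telemetry calls
export N8N_VERSION_NOTIFICATIONS_ENABLED=false   # skip version-check requests
export EXTERNAL_FRONTEND_HOOKS_URLS=             # clear external frontend hook URLs

n8n start
```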
Does this solution also work on callin.io cloud, and if so, how do I set these variables there?
Hello, thank you very much for sharing the container; it has been incredibly useful, and we really value your time and effort.
Could you provide guidance on how we might adapt it for more recent versions?
Alternatively, do you have plans to update the container in the future?