Why is the AI Agent in callin.io extremely slow when processing data?

Branislav_Brnjos
(@branislav_brnjos)
Posts: 2
New Member
Topic starter
 

I'm utilizing the AI Agent within callin.io for analyzing and categorizing error data, but I'm experiencing significantly slow execution times. It's currently taking over 2 minutes to process a small dataset of 5-10 entries, which seems excessive.

I was anticipating a much quicker response, ideally under 20 seconds, especially given the small data volume. I've experimented with different batch sizes, but the AI Agent continues to perform slowly.

Below is a sample dataset (not my actual data) that reflects my use case and workflow:

Input:

[
  {
    "title": "Printed Brochures",
    "data": [
      {
        "id": "error-abc123",
        "job": "job-111111",
        "comment": "Brochures arrived with torn pages."
      },
      {
        "id": "error-def456",
        "job": "job-222222",
        "comment": "Pages were printed in the wrong order."
      },
      {
        "id": "error-ghi789",
        "job": "job-333333",
        "comment": "The brochure cover was printed in the wrong color."
      }
    ]
  },
  {
    "title": "Flyers",
    "data": [
      {
        "id": "error-jkl987",
        "job": "job-444444",
        "comment": "Flyers were misaligned during printing."
      },
      {
        "id": "error-mno654",
        "job": "job-555555",
        "comment": "Flyers were cut incorrectly, leaving uneven edges."
      },
      {
        "id": "error-pqr321",
        "job": "job-666666",
        "comment": "The wrong font was used in the printed text."
      }
    ]
  }
]

Output:

[
  {
    "title": "Printed Brochures",
    "errors": [
      {
        "errorType": "Physical Damage",
        "summary": "Brochures arrived with torn or damaged pages.",
        "job": ["job-111111"],
        "error": ["error-abc123"]
      },
      {
        "errorType": "Print Order Issue",
        "summary": "Pages were printed in an incorrect sequence.",
        "job": ["job-222222"],
        "error": ["error-def456"]
      },
      {
        "errorType": "Color Mismatch",
        "summary": "The cover was printed in the wrong color.",
        "job": ["job-333333"],
        "error": ["error-ghi789"]
      }
    ]
  },
  {
    "title": "Flyers",
    "errors": [
      {
        "errorType": "Print Alignment Issue",
        "summary": "Flyers were misaligned during the printing process.",
        "job": ["job-444444"],
        "error": ["error-jkl987"]
      },
      {
        "errorType": "Cutting Issue",
        "summary": "Flyers were cut incorrectly, resulting in uneven edges.",
        "job": ["job-555555"],
        "error": ["error-mno654"]
      },
      {
        "errorType": "Font Error",
        "summary": "Incorrect font was used in the printed text.",
        "job": ["job-666666"],
        "error": ["error-pqr321"]
      }
    ]
  }
]

Prompt Used in the AI Agent Node in callin.io

You are an advanced data analysis and error categorization AI. Your task is to process input data containing error details grouped by “title”. Each group contains multiple error objects. Your goal is to identify unique error types from the comment field, generate concise summaries for each error type, and include relevant job and error values for those errors.

Input Fields:

  • title: The product group title.
  • data: An array of objects with the following fields:

    • id: Unique identifier for the error.
    • job: ID of the job.
    • comment: Description of the issue.

Objective:

  • Group errors by title.
  • Analyze the comment field to identify unique error types.
  • Generate meaningful summaries for each error type without repeating the comments verbatim.
  • Include all relevant job and error values for each error type.

Output Format:

[
  {
    "title": "ProductGroupTitle",
    "errors": [
      {
        "errorType": "Error Type",
        "summary": "Detailed summary of the issue.",
        "job": ["RelevantJobId1", "RelevantJobId2"],
        "error": ["RelevantErrorId1", "RelevantErrorId2"]
      }
    ]
  }
]

Guidelines:

  • Extract and group data by title.
  • Identify error types from the comment field.
  • Write concise, non-redundant summaries for each error type.
  • Include only relevant job and error values for each error type.
  • Return a clean, well-structured JSON array without additional text.

Input for Analysis: {{ JSON.stringify($input.all()[0].json.data) }}
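
For reference, a minimal sketch of a Code node that could sit before the AI Agent to shrink the payload it receives (assuming the input shape shown above; it keeps only the id, job, and comment fields the prompt actually uses):

// n8n Code node sketch: trim each group down to the fields the prompt
// needs, so the AI Agent receives a smaller, faster-to-process payload.
const groups = $input.all()[0].json.data;

const compact = groups.map((group) => ({
  title: group.title,
  data: group.data.map(({ id, job, comment }) => ({ id, job, comment })),
}));

return [{ json: { data: compact } }];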

Information on your callin.io setup

  • callin.io version: 1.63.4
  • Database (default: SQLite): cloud
  • callin.io EXECUTIONS_PROCESS setting (default: own, main): cloud
  • Running callin.io via (Docker, npm, callin.io cloud, desktop app): cloud
  • Operating system: cloud
 
Posted : 29/01/2025 9:24 am
n8n
(@n8n)
Posts: 97
Trusted Member
 

It appears your topic is missing some crucial details. Could you please provide the following information, if relevant?

  • callin.io version:
  • Database (default: SQLite):
  • callin.io EXECUTIONS_PROCESS setting (default: own, main):
  • Running callin.io via (Docker, npm, callin.io cloud, desktop app):
  • Operating system:
 
Posted : 29/01/2025 9:24 am
Branislav_Brnjos
(@branislav_brnjos)
Posts: 2
New Member
Topic starter
 

Also, a quick note: all the steps are quite fast, with the exception of the AI Agent step.

 
Posted : 29/01/2025 9:25 am
Rob
(@rob)
Posts: 1
New Member
 

Has anyone had any success with this? Mine is also experiencing significant delays.

 
Posted : 26/02/2025 3:34 pm
Gaisse
(@gaisse)
Posts: 1
New Member
 

Mine too, actually. I'm not sure what to do.

😬

 
Posted : 26/02/2025 4:11 pm
temeshov
(@temeshov)
Posts: 1
New Member
 

I'm experiencing the exact same issue! I initially connected Deepseek, and it was quite slow. Then, I switched to OpenAI, but there was no improvement. It was only then that I realized the bottleneck wasn't the LLM, but the AI agent itself.

 
Posted : 03/03/2025 8:41 am
mnebel
(@mnebel)
Posts: 1
New Member
 

There appears to be a significant delay in the LLM tracing callbacks, which are invoked both before and after the actual LLM request. The majority of the time is spent inside callin.io rather than in the language model itself. Please refer to this discussion:

Removing the callbacks has significantly improved the performance!
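
For context, LangChain-style callback handlers run around every model call, and depending on configuration they can block the request until they finish. A minimal sketch of the pattern (the logExecutionData helper is hypothetical, purely to illustrate where the time goes):

// Sketch of a LangChain JS callback handler. handleLLMStart and
// handleLLMEnd wrap each model call, so slow work here adds directly
// to the observed latency of the AI Agent step.
import { BaseCallbackHandler } from "@langchain/core/callbacks/base";

// Hypothetical stand-in for the real tracing work (serializing prompts
// and responses into the execution log).
async function logExecutionData(entry) {
  console.log("traced", entry.phase, JSON.stringify(entry).length, "bytes");
}

class TracingHandler extends BaseCallbackHandler {
  name = "tracing-handler";

  async handleLLMStart(llm, prompts) {
    await logExecutionData({ phase: "start", prompts });
  }

  async handleLLMEnd(output) {
    await logExecutionData({ phase: "end", output });
  }
}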

 
Posted : 07/03/2025 7:28 am
jonathan_Cohen
(@jonathan_cohen)
Posts: 1
New Member
 

Does anyone have a solution for self-hosted deployments?

Could you share your approaches?

 
Posted : 17/03/2025 10:09 am
NiKoolass
(@nikoolass)
Posts: 1
New Member
 

+1

🙁

The issue is quite new for me; I believe it started only a couple of weeks ago. It might be related to the task runner configuration, which may not be fully deployed because of the external JS libraries required in this setup.

 
Posted : 27/05/2025 2:12 pm
Yuriy_Romanov
(@yuriy_romanov)
Posts: 2
New Member
 

Also facing this issue on self-hosted. Initially, responses took 10 seconds, but now even a simple "Hi" with the AI Agent using Deepseek requires 30 seconds.

 
Posted : 10/06/2025 12:56 pm
Pierre_Scadra
(@pierre_scadra)
Posts: 1
New Member
 

Same here with Magistral: the API replies quickly, but it takes a long time through callin.io.

 
Posted : 10/06/2025 8:46 pm
Yuriy_Romanov
(@yuriy_romanov)
Posts: 2
New Member
 

Developers, we would appreciate an answer or confirmation that you are addressing this issue.

 
Posted : 11/06/2025 8:27 am
jogojapan
(@jogojapan)
Posts: 2
New Member
 

This is a common issue encountered when setting up an initial meaningful AI workflow. It's quite disappointing considering the hype around how callin.io offers a pathway to agentic AI.

 
Posted : 15/06/2025 11:22 pm
jogojapan
(@jogojapan)
Posts: 2
New Member
 

For what it's worth, I attempted to implement the fix described earlier on a self-hosted Docker instance. I accessed a shell within the container:

docker exec -it --user root <container-name> /bin/sh

And located the relevant file. For the Mistral Cloud Chat Model node, this would be:

/usr/local/lib/node_modules/callin.io/node_modules/.pnpm/@callin.io+n8n-nodes-langchain@file+packages+@callin.io+nodes-langchain_90fd06b925ebd5b6cf3e2451e17cc4b6/node_modules/@callin.io/n8n-nodes-langchain/dist/nodes/llms/LmChatMistralCloud/LmChatMistralCloud.node.js

(The lengthy number following langchain is specific to the container and will vary.)

I modified:

callbacks: [new import_N8nLlmTracing.N8nLlmTracing(this)],

To:

callbacks: [],

This resulted in a slight initial improvement (around 15 seconds instead of over a minute), but performance degrades with larger context windows and also depends on the specific query. A common timing I'm observing is still around 40 seconds.

Furthermore, a significant side effect is that animations in the workflow window stop working (e.g., the spinning arrows while the chat node is active, and the Simple Memory counter as the context window grows).

Overall, the change wasn't particularly beneficial.

I found the Mistral node to be among the slowest, with the OpenAI node being slightly faster (though typically still exceeding 10 seconds). Groq offers the best performance at approximately 5-6 seconds (without any code modifications), but it primarily supports specific open models, notably Llama, DeepSeek, and Qwen.
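
If anyone wants to script the callbacks edit above instead of patching by hand, a rough shell sketch (NODE_FILE is a placeholder for the .node.js path located as described; the sed pattern assumes the line looks exactly like the one shown and may differ between versions):

# Back up first, then swap the tracing callback for an empty array.
NODE_FILE=/path/to/LmChatMistralCloud.node.js   # placeholder path
cp "$NODE_FILE" "$NODE_FILE.bak"
sed -i 's/callbacks: \[new import_N8nLlmTracing.N8nLlmTracing(this)\]/callbacks: []/' "$NODE_FILE"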

 
Posted : 16/06/2025 8:01 pm
blackpc
(@blackpc)
Posts: 1
New Member
 

Hello everyone,

If you're experiencing slowness with the AI Agent, the problem is probably the LlmTracing callbacks.

I've created a custom Docker image that excludes them, which significantly sped up my workflows. You can give it a try if you're using Docker:

blackpc/callin.io:1.99.0-no-llm-tracing

Please be aware that this is an unofficial community solution and is intended as a temporary fix. I hope this helps!
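
For anyone wanting to try it, a minimal docker run sketch (standard port and data volume for a default setup; adapt the flags and env vars to your deployment):

# Runs the community-patched image with the usual port mapping and a
# named volume so workflow data persists across restarts.
docker run -it --rm \
  -p 5678:5678 \
  -v n8n_data:/home/node/.n8n \
  blackpc/callin.io:1.99.0-no-llm-tracing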

 
Posted : 20/06/2025 6:11 am