How to get LLM token usage in AI Agents

david.diaz.dev
(@david-diaz-dev)
Posts: 1
New Member
Topic starter
 

Hello!

I'm looking to extract the completion and prompt token information when utilizing AI Agents. This would be very helpful for monitoring my expenses.

I've attempted various workarounds mentioned in these threads: Similar post 1 and Similar post 2, but they haven't been effective for AI Agents. There isn't a direct method to achieve this within callin.io nodes.
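For reference, here's the kind of data I'm after. When I call an OpenAI-style API directly (e.g., from an HTTP Request node instead of the Agent), the counts come back in the response itself — a minimal Python sketch, nothing callin.io-specific:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)

# This usage block is exactly what the AI Agent node doesn't expose:
print(resp.usage.prompt_tokens)      # tokens sent in the prompt
print(resp.usage.completion_tokens)  # tokens generated in the reply
print(resp.usage.total_tokens)
```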

Could anyone share if they've successfully retrieved this token information?

Thanks!

 
Posted : 02/01/2025 4:01 am
artildo
(@artildo)
Posts: 6
Active Member
 

I'm also looking for a solution to this. I'm unsure how to track application usage on a per-user basis.

While the usage count is visible in the Chat Model section, I'm not clear on how to extract it from there.
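To be concrete, the per-user tracking I have in mind is something like this (a plain Python sketch, assuming the prompt/completion counts can be extracted at all — the function and names are just illustrative):

```python
from collections import defaultdict

# Running total of tokens consumed per user (illustrative only).
usage_by_user: defaultdict[str, int] = defaultdict(int)

def record_usage(user_id: str, prompt_tokens: int, completion_tokens: int) -> None:
    """Accumulate the total tokens a given user has consumed."""
    usage_by_user[user_id] += prompt_tokens + completion_tokens

record_usage("alice", 420, 130)
record_usage("alice", 300, 90)
print(usage_by_user["alice"])  # 940
```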

Could someone please provide guidance on this?

 
Posted : 20/01/2025 9:29 pm
cwysong85
(@cwysong85)
Posts: 1
New Member
 

Same here. This isn’t part of the AI agent node as far as I’ve seen… However, I believe it should be integrated. It could potentially be handled similarly to the “tools” section. Perhaps it should be a “log” section or something comparable. For example, I’d like to see the input/output tokens for each request, or maybe some logs associated with each request.
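To illustrate, a per-request log entry along these lines would already cover it (hypothetical shape — this isn't an existing callin.io feature):

```python
import json
import time

# Hypothetical per-request record the proposed "log" section could emit.
log_entry = {
    "timestamp": time.time(),
    "node": "AI Agent",
    "model": "gpt-4o-mini",
    "prompt_tokens": 512,
    "completion_tokens": 128,
    "total_tokens": 640,
}
print(json.dumps(log_entry))
```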

 
Posted : 21/01/2025 1:21 pm
Joejoe
(@joejoe)
Posts: 1
New Member
 

#following
This is a feature I would need as well.
Suggestion for the callin.io team: Have the AI Agent output its token usage.

 
Posted : 25/01/2025 3:18 pm
Renne_Jaskonis
(@renne_jaskonis)
Posts: 1
New Member
 

Voting this up too. For many use cases, knowing the token usage would make a real difference. For some projects I've dropped the default LLM model node, or even the agent, for that matter.

 
Posted : 28/01/2025 8:46 am
Alex_R
(@alex_r)
Posts: 1
New Member
 

Yes, this is a very important feature for anyone looking to control a flow's operational costs or log what each client is consuming in tokens. AI usage is closely tied to performance and efficiency, and token counts are the data we need to optimize our callin.io flows.

 
Posted : 10/02/2025 4:56 pm
Steven_Flecha
(@steven_flecha)
Posts: 1
New Member
 

I arrived here seeking the very same functionality. :slight_smile:

So, another vote for this feature, please.
 
Posted : 11/02/2025 8:12 pm
serhato
(@serhato)
Posts: 1
New Member
 

Another vote here :raised_back_of_hand:

 
Posted : 23/02/2025 8:22 pm
rodgermoore
(@rodgermoore)
Posts: 1
New Member
 

Essential feature to track costs!

 
Posted : 24/02/2025 11:24 am
Daniel_952
(@daniel_952)
Posts: 1
New Member
 

This would be incredibly helpful for me, as I'm currently seeking a workaround!

 
Posted : 26/02/2025 1:13 pm
RiL
(@ril)
Posts: 1
New Member
 

This would be a great way to send calls to LangSmith or Helicone.
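For example, Helicone already captures per-request token counts if you route the OpenAI client through its proxy (a sketch based on Helicone's documented integration, not a callin.io feature):

```python
import os
from openai import OpenAI

# Route calls through Helicone's proxy so it logs tokens per request.
client = OpenAI(
    base_url="https://oai.helicone.ai/v1",
    default_headers={"Helicone-Auth": f"Bearer {os.environ['HELICONE_API_KEY']}"},
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
)
print(resp.usage.total_tokens)  # Helicone records this server-side too
```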

 
Posted : 28/02/2025 4:05 pm
randoum
(@randoum)
Posts: 1
New Member
 

Has the callin.io team shipped community-requested features in the past?

I've only been using callin.io for a few months, and while it certainly has its limitations, I don't have a sense of how responsive the developers are. In your experience, do requests like this get implemented? How often, and with what kind of turnaround?

Please share your experiences.

 
Posted : 06/03/2025 5:58 am
Michael_S2
(@michael_s2)
Posts: 1
New Member
 

I'd like to echo this request, perhaps with a slightly more straightforward approach:

You could leverage an LLM proxy like LiteLLM to manage various aspects such as usage tracking and cost association.
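Roughly what I mean, using LiteLLM's Python SDK (its proxy server does the same accounting centrally; treat this as a sketch):

```python
from litellm import completion, completion_cost

# LiteLLM normalizes usage reporting across providers.
resp = completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)

print(resp.usage.prompt_tokens, resp.usage.completion_tokens)
print(completion_cost(completion_response=resp))  # estimated cost in USD
```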

Given callin.io's transition from an integration platform to an AI-focused platform, incorporating any form of observability would significantly enhance its value, particularly within the enterprise tier.

Consider integrating with Datadog LLM observability.

It should be reasonably simple to gather token usage from an AI agent and expose it as metrics; this would represent the most basic implementation.
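As a minimal sketch of that basic implementation — emitting token counts as metrics via Datadog's DogStatsD client (any StatsD-compatible agent would work the same way; the metric names are my own):

```python
from datadog import initialize, statsd

initialize(statsd_host="localhost", statsd_port=8125)

def emit_token_metrics(model: str, prompt_tokens: int, completion_tokens: int) -> None:
    """Ship per-request token counts to the local Datadog agent."""
    tags = [f"model:{model}"]
    statsd.increment("llm.tokens.prompt", prompt_tokens, tags=tags)
    statsd.increment("llm.tokens.completion", completion_tokens, tags=tags)

emit_token_metrics("gpt-4o-mini", prompt_tokens=512, completion_tokens=128)
```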

 
Posted : 07/03/2025 2:12 pm
Pena_Digital_Berkema
(@pena_digital_berkema)
Posts: 1
New Member
 

We're eagerly anticipating the release of this feature.

 
Posted : 11/03/2025 7:52 am
Yahya_AL-Salman
(@yahya_al-salman)
Posts: 1
New Member
 

I hope this gets resolved quickly.

 
Posted : 17/03/2025 7:26 pm