
How to get LLM token usage in AI Agents

silvanderwoerd
(@silvanderwoerd)
Posts: 2
New Member
 

I'm also looking for this functionality.

 
Posted : 18/03/2025 7:35 pm
philrox
(@philrox)
Posts: 1
New Member
 

+1 for the feature

 
Posted : 21/03/2025 9:58 am
solomon
(@solomon)
Posts: 78
Trusted Member
 

I'm also searching for a similar solution.

 
Posted : 30/03/2025 11:43 am
vkarbovnichy
(@vkarbovnichy)
Posts: 3
New Member
 

It's worth mentioning that callin.io can send all of its data to LangSmith, which enables cost tracking; that's how we're using it.

The limitation, however, is that I can't break the data down by workflow or by any criterion other than the LLM model used. Essentially, everything is tracked per callin.io instance.
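
For anyone trying to reproduce this setup: LangChain-based tools conventionally switch LangSmith tracing on through environment variables. Whether callin.io honors these exact variable names is my assumption, not something confirmed in this thread, so treat the sketch below as illustrative and check the docs:

// Hedged sketch in TypeScript: the conventional LangChain/LangSmith
// tracing variables. In practice these would be set in the host
// environment rather than assigned in code; whether callin.io reads
// these exact names is an assumption.
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "<your LangSmith API key>";
// Every run lands in a single project, which is why the data can only
// be separated per instance (and per model), as noted above.
process.env.LANGCHAIN_PROJECT = "callin-io-instance";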

 
Posted : 03/04/2025 11:25 am
Antony_Eardrop
(@antony_eardrop)
Posts: 1
New Member
 

Here’s a workaround:

You can use a callin.io: Get Execution node with the “Include Execution Details” option enabled.

This fetches details for the entire execution, including Chat Model token usage in the following format:

{
  "tokenUsage": {
    "completionTokens": <number>,
    "promptTokens": <number>,
    "totalTokens": <number>
  }
}

You can then set up a separate workflow and trigger it with the execution ID once your AI Agent workflow has finished.

It's a bit convoluted, but it should solve the issue.
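
To make the aggregation step concrete, here's a minimal sketch (my own, not part of the workaround above) of what the second workflow's code step might do once the execution JSON has been fetched. Only the tokenUsage shape shown above comes from callin.io; the executionData wrapper in the example is purely illustrative:

// Minimal sketch: recursively walk the fetched execution JSON and
// sum every tokenUsage object found, whichever node produced it.
interface TokenUsage {
  completionTokens: number;
  promptTokens: number;
  totalTokens: number;
}

function collectTokenUsage(value: unknown, totals: TokenUsage): void {
  if (value === null || typeof value !== "object") return;
  const obj = value as Record<string, unknown>;
  const usage = obj.tokenUsage as Partial<TokenUsage> | undefined;
  if (usage && typeof usage.totalTokens === "number") {
    totals.completionTokens += usage.completionTokens ?? 0;
    totals.promptTokens += usage.promptTokens ?? 0;
    totals.totalTokens += usage.totalTokens;
  }
  // Recurse into nested objects and arrays alike.
  for (const child of Object.values(obj)) collectTokenUsage(child, totals);
}

// Illustrative stand-in for the JSON returned by Get Execution;
// the real nesting may differ, which is why the walk is recursive.
const executionData = {
  runData: {
    "OpenAI Chat Model": [
      { data: { tokenUsage: { completionTokens: 42, promptTokens: 100, totalTokens: 142 } } },
    ],
  },
};

const totals: TokenUsage = { completionTokens: 0, promptTokens: 0, totalTokens: 0 };
collectTokenUsage(executionData, totals);
console.log(totals); // { completionTokens: 42, promptTokens: 100, totalTokens: 142 }

Because the walk doesn't hard-code where the usage data is nested, it still adds up correctly when several model calls ran in the same execution.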

 
Posted : 14/04/2025 1:32 pm