I'm also looking for this functionality.
+1 for the feature
I'm also searching for a similar solution.
Worth mentioning: callin.io can send all data to LangSmith, which enables cost tracking, and that's how we use it.
The limitation, however, is that I can't break the data down by workflow or any criterion other than the LLM model used. Essentially, it tracks everything on a per-instance basis.
Here’s a workaround:
You can use a callin.io: Get Execution node with the “Include Execution Details” option enabled.
This fetches details for the entire execution, including Chat Model token usage in this format:
```json
{
  "tokenUsage": {
    "completionTokens": <number>,
    "promptTokens": <number>,
    "totalTokens": <number>
  }
}
```
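To aggregate those numbers across all model calls in a single execution, a Code node in the follow-up workflow could walk the execution payload and sum every `tokenUsage` object it finds. The payload shape in the example below is an assumption for illustration; adapt the traversal to whatever your Get Execution node actually returns.

```javascript
// Recursively collect every "tokenUsage" object anywhere in the payload.
function collectTokenUsage(node, found = []) {
  if (node && typeof node === "object") {
    if (node.tokenUsage && typeof node.tokenUsage.totalTokens === "number") {
      found.push(node.tokenUsage);
    }
    for (const value of Object.values(node)) collectTokenUsage(value, found);
  }
  return found;
}

// Sum the usage across all model calls in the execution.
function sumTokenUsage(execution) {
  return collectTokenUsage(execution).reduce(
    (acc, u) => ({
      completionTokens: acc.completionTokens + u.completionTokens,
      promptTokens: acc.promptTokens + u.promptTokens,
      totalTokens: acc.totalTokens + u.totalTokens,
    }),
    { completionTokens: 0, promptTokens: 0, totalTokens: 0 }
  );
}

// Example payload mimicking the structure shown above (hypothetical shape).
const execution = {
  data: {
    steps: [
      { tokenUsage: { completionTokens: 120, promptTokens: 480, totalTokens: 600 } },
      { tokenUsage: { completionTokens: 30, promptTokens: 70, totalTokens: 100 } },
    ],
  },
};
console.log(sumTokenUsage(execution).totalTokens); // 700
```

The recursive search means you don't need to know exactly where in the execution tree each Chat Model node reports its usage.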
You can then set up a separate workflow and trigger it with the execution ID once your AI Agent workflow finishes.
While this might seem a bit complex, it should effectively solve the issue.
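If you also want a dollar figure rather than raw token counts, the same follow-up workflow could convert the `tokenUsage` numbers into an estimated cost. The price table below is a placeholder, not a real rate card; substitute the current prices for whichever model your agent uses.

```javascript
// Hypothetical per-1K-token prices in USD -- placeholders only.
const PRICES_PER_1K = {
  "gpt-4o-mini": { prompt: 0.00015, completion: 0.0006 },
};

// Convert a tokenUsage object (as returned in the execution details)
// into an estimated cost for a given model.
function estimateCost(model, usage) {
  const p = PRICES_PER_1K[model];
  if (!p) throw new Error(`No price configured for model: ${model}`);
  return (
    (usage.promptTokens / 1000) * p.prompt +
    (usage.completionTokens / 1000) * p.completion
  );
}

const usage = { completionTokens: 200, promptTokens: 1000, totalTokens: 1200 };
console.log(estimateCost("gpt-4o-mini", usage)); // 0.00027
```

From there you can write the per-execution cost to a spreadsheet or database, which gives you the per-workflow breakdown that the LangSmith integration can't.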