In my specific case, I'm using ChatGPT, so I'll use that terminology. It would be beneficial if this functionality worked for all AI API calls to different models.
Suggested upsell:
- Base Version: Provide these statistics within the UI. To maintain value, avoid making it too generic; ideally, it should still offer workflow-level analytics.
- Enterprise: Enable users to retrieve this data per execution and log it to their own database. Offer deeper granularity as well: break down costs for each of the multiple API calls made within a single workflow.
It would help if there was a node for:
It's unclear if this should be a dedicated node, but updating all other nodes to include this information seems more challenging from a development perspective.
Alternatively, this could be integrated into the native callin.io UI as a valuable feature, potentially broken down by agent, or even per node within an agent.
Tracking token usage on the last AI API call. For instance, with ChatGPT agent runs, the response includes:
```json
{
  "usage": {
    "prompt_tokens": 125,
    "completion_tokens": 42,
    "total_tokens": 167
  }
}
```
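For cost monitoring, the `usage` object maps directly to dollars once multiplied by per-token prices. A minimal sketch in Python (the prices below are illustrative placeholders, not OpenAI's actual rates; always check the current pricing page):

```python
# Hypothetical per-1K-token prices -- placeholders for illustration only.
PRICES_PER_1K = {
    "gpt-4o": {"prompt": 0.005, "completion": 0.015},
}

def estimate_cost(usage: dict, model: str = "gpt-4o") -> float:
    """Estimate the dollar cost of one completion from its `usage` object."""
    p = PRICES_PER_1K[model]
    return (usage["prompt_tokens"] / 1000) * p["prompt"] + \
           (usage["completion_tokens"] / 1000) * p["completion"]

# The sample usage object from above:
usage = {"prompt_tokens": 125, "completion_tokens": 42, "total_tokens": 167}
cost = estimate_cost(usage)
```

Summing these per-execution estimates into a database is exactly the workflow-level analytics the feature request describes.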
My use case:
We will be making many different AI API calls in what we are building. I need to monitor costs and will eventually want to optimize them.
Any resources to support this?
https://api.openai.com/v1/chat/completions
Are you willing to work on this?
It's not urgent, but it will become so once we see higher usage. We are currently in development. If a solution like this isn't available by the time we have a live application, I would likely need to implement a workaround (e.g., calling OpenAI via a plain HTTP request so I can read the `usage` field myself).
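As a sketch of that workaround, one could call the chat completions endpoint directly and pull the `usage` object out of the raw response for logging. This uses only the Python standard library; the function names are my own, and error handling is omitted:

```python
import json
import urllib.request

OPENAI_URL = "https://api.openai.com/v1/chat/completions"

def chat_completion(api_key: str, payload: dict) -> dict:
    """POST a chat completion request and return the full JSON response,
    including the `usage` object that built-in AI nodes typically discard."""
    req = urllib.request.Request(
        OPENAI_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def extract_usage(response: dict) -> dict:
    """Pull the token counts out of a completion response for logging."""
    return response.get("usage", {})
```

The `extract_usage` step is where the per-execution logging to one's own database would hook in.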
Great, thank you! You can delete this thread if necessary. This is very helpful for my analytics.