Enable Prompt Caching for AI Agent Models

ktan99
(@ktan99)
Posts: 2
New Member
Topic starter
 

The suggestion is:

Implement a toggle button to enable/disable Prompt Caching for individual models.

My use case:

I use LLM models within the AI Agent node and want to conserve tokens. My calls use a lengthy system prompt that always starts with the same prefix, which is exactly the pattern prompt caching is designed for.

I think it would be beneficial to add this because:

Gemini supports both implicit and explicit prompt caching, and I would like to be able to set the caching behavior explicitly (see the first sketch below).

Claude likewise exposes a cache_control parameter for prompt caching (see the second sketch below).
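
For reference, here is a minimal sketch of Gemini's explicit caching using the google-genai Python SDK. The model name, TTL, display name, and prompt text are placeholders, and explicit caches require the prompt to meet a minimum token size, so treat this as an illustration rather than a drop-in snippet:

```python
from google import genai
from google.genai import types

LONG_SYSTEM_PROMPT = "...the long, shared system-prompt prefix..."

client = genai.Client()  # reads the API key from the environment

# Create an explicit cache that holds the shared system prompt.
cache = client.caches.create(
    model="gemini-2.0-flash-001",  # placeholder model name
    config=types.CreateCachedContentConfig(
        display_name="agent-system-prefix",
        system_instruction=LONG_SYSTEM_PROMPT,
        ttl="300s",  # keep the cache alive for 5 minutes
    ),
)

# Later requests reference the cache instead of resending the long prefix.
response = client.models.generate_content(
    model="gemini-2.0-flash-001",
    contents="What does the latest workflow run report?",
    config=types.GenerateContentConfig(cached_content=cache.name),
)
print(response.text)
```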

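And a minimal sketch of the same idea with the Anthropic Python SDK, marking the shared system prompt as cacheable via cache_control (model name and prompt text are placeholders):

```python
import anthropic

LONG_SYSTEM_PROMPT = "...the long, shared system-prompt prefix..."

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model name
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_SYSTEM_PROMPT,
            # Mark the prefix as cacheable; repeat calls with the same prefix hit the cache.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "What does the latest workflow run report?"}],
)

# Usage metadata reports cache_creation_input_tokens / cache_read_input_tokens.
print(response.usage)
```

Exposing a per-model toggle in the AI Agent node could simply switch this kind of caching behavior on or off for each configured model.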
Any resources to support this?

Are you willing to work on this?

I'm not at an advanced level, but I can assist with testing or similar tasks.

 
Posted: 17/07/2025 3:19 pm