Enforcing Company-Wide LLM Proxy and Logging Prompts/Responses for All Users

Umur_Kaygusuz
(@umur_kaygusuz)
 

The concept is:

We are requesting a set of enterprise features to manage LLM usage (OpenAI, Azure OpenAI, Claude) securely within callin.io.

My use case:

  • Enforce LLM Proxy via baseURL Override
    Mandate a global baseURL (e.g. https://llmproxy.company.com/v1) for all OpenAI/Azure credentials, and prevent bypass by blocking users from entering their own.
  • Credential Creation Restrictions
    Only permit administrators to create LLM credentials; users consume centrally managed, proxy-enforced shared credentials.
  • Prompt & Response Logging
    Log the prompt and LLM response centrally for every AI call (similar to Lakera Guard or Lasso). Include user ID, workflow ID, and timestamp.
  • Force Security Node in Workflows
    Allow administrators to enforce that every workflow using an LLM includes a predefined “Security Check” node before any model interaction.
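To illustrate the baseURL enforcement point, here is a minimal sketch of the kind of server-side check we have in mind. The function and field names (`enforce_base_url`, `baseURL`, the allowlisted host) are hypothetical, not an existing callin.io API:

```python
from urllib.parse import urlparse

# Hypothetical admin-configured proxy host; illustrative only.
ALLOWED_PROXY = "llmproxy.company.com"

def enforce_base_url(credential: dict) -> dict:
    """Rewrite any LLM credential whose baseURL does not point at
    the mandated company proxy, so user-supplied URLs cannot bypass it."""
    host = urlparse(credential.get("baseURL", "")).hostname
    if host != ALLOWED_PROXY:
        # Replace the user-supplied endpoint with the mandated proxy.
        credential["baseURL"] = f"https://{ALLOWED_PROXY}/v1"
    return credential

cred = enforce_base_url({"baseURL": "https://api.openai.com/v1"})
print(cred["baseURL"])  # https://llmproxy.company.com/v1
```

Running the check on credential save (and again at call time) would make the proxy mandatory regardless of what a user types into the credential form.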
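And as a sketch of the prompt/response logging item: one structured audit record per AI call, carrying user ID, workflow ID, and timestamp. The schema and function name are illustrative suggestions, not an existing format:

```python
import json
import time
import uuid

def log_llm_call(user_id: str, workflow_id: str,
                 prompt: str, response: str) -> dict:
    """Build one structured audit record per LLM call; the field
    names here are a proposed schema, not an existing one."""
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "user_id": user_id,
        "workflow_id": workflow_id,
        "prompt": prompt,
        "response": response,
    }
    # In a real deployment this would ship to a central sink (SIEM,
    # log pipeline); here it is just emitted as a JSON line.
    print(json.dumps(record))
    return record

entry = log_llm_call("u-42", "wf-7", "Summarize Q2 report", "The report shows...")
```

Emitting the record from the enforced proxy (rather than from each workflow) would guarantee coverage of every call.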

I believe adding this would be beneficial because:

These controls are crucial for enterprise-grade AI governance, particularly in environments where data loss prevention, compliance, and misuse detection are paramount. They would enable secure LLM adoption without relying on fragile workarounds.

Any resources to support this?

  • callin.io Enterprise Docs (RBAC, Credentials)

Are you willing to work on this?

Yes, I am open to contributing ideas, testing feedback, and design input.

 
Posted : 02/06/2025 11:48 am