
How to host LLM for sensitive data elaboration?

2 Posts
2 Users
0 Reactions
6 Views
Simone_Paonessa
(@simone_paonessa)
Posts: 2
New Member
Topic starter
 

Hello everyone,

I'm looking to develop automations for processing personal data for my clients, such as an automated email responder. I'm curious if it's possible to train and host a Large Language Model (LLM) on a server for this data processing.

Ideally, each client would have their own secure, personalized LLM trained on their specific data. Is this a viable approach? Would I need to self-host a separate instance for each client, or are there more efficient ways to manage multiple clients? Additionally, what module would be most appropriate for this kind of setup?

I would greatly appreciate any insights or suggestions you might have.

 
Posted : 24/03/2025 10:58 pm
Stoyan_Vatov
(@stoyan_vatov)
Posts: 22
Eminent Member
 

Hey Simone,

Regarding the only callin.io-related part of your question: you can expose an API on your server and either create a custom callin.io app to interact with it, or use the generic HTTP call module. Which you choose depends on the API's complexity and how you intend to connect.
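If you go the generic-HTTP route, the server side is just a JSON endpoint the module can POST to. A minimal sketch of the client side, assuming your self-hosted model sits behind an OpenAI-compatible chat endpoint (as exposed by servers like Ollama or vLLM); the URL and model name are placeholders you would adjust:

```python
import json
import urllib.request

# Placeholder: point this at your own self-hosted, OpenAI-compatible server.
API_URL = "http://localhost:11434/v1/chat/completions"


def build_chat_payload(model: str, user_message: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def ask_llm(model: str, user_message: str) -> str:
    """POST the payload to the self-hosted endpoint and return the reply text."""
    body = json.dumps(build_chat_payload(model, user_message)).encode("utf-8")
    req = urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        data = json.load(resp)
    return data["choices"][0]["message"]["content"]
```

The same JSON body is what a callin.io HTTP call module would send, so a custom app and the generic module can share one server-side endpoint.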

As for the AI itself, it's certainly possible. I imagine you could implement it either way and it would function. One option is a single LLM that accesses individual clients’ databases to handle their requests. Alternatively, you could have one LLM per client. The latter approach would prevent any cross-contamination or mix-ups. However, the former approach offers much easier scalability, as you simply add a new dataset when onboarding a new client.
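To make the single-shared-LLM option safe, the usual trick is strict per-client partitioning: every stored document is tagged with a client ID, and retrieval for a request only ever reads that client's partition. A toy in-memory sketch of the idea (all names hypothetical; a real setup would use a database or vector store with the same scoping rule):

```python
from collections import defaultdict


class ClientDataStore:
    """In-memory sketch of per-client data isolation for one shared LLM.

    Documents live under their owner's client ID, and retrieval is scoped
    to that ID, so a single model can serve many clients without
    cross-contamination of their data.
    """

    def __init__(self) -> None:
        self._docs = defaultdict(list)  # client_id -> list of documents

    def add_document(self, client_id: str, text: str) -> None:
        self._docs[client_id].append(text)

    def retrieve(self, client_id: str, query: str) -> list[str]:
        # Naive keyword match, scoped strictly to this client's documents.
        return [d for d in self._docs[client_id] if query.lower() in d.lower()]


def build_prompt(store: ClientDataStore, client_id: str, question: str) -> str:
    """Assemble a prompt from the client's own context plus the question."""
    context = "\n".join(store.retrieve(client_id, question))
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Onboarding a new client is then just a new partition, which is where the easier scalability of the shared-model approach comes from.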

 
Posted : 25/03/2025 10:01 am