How to connect an HTTP request as an LLM

11 Posts
4 Users
0 Reactions
4 Views
naor117
(@naor117)
Posts: 5
Active Member
Topic starter
 

Hello,

I've searched the forum but couldn't find an answer to this topic.

I'm working on an offline network and need to connect callin.io to a language model through an HTTP gateway I've set up.

Is this feasible? At the moment it appears I only have access to the built-in models, and an API key is required for them (and again, I'm operating on an offline network).

Thank you very much.

This is an image of someone who asked a similar question.

Information on your callin.io setup

  • callin.io version: latest
  • Database (default: SQLite): SQLite
  • callin.io EXECUTIONS_PROCESS setting (default: own, main): own
  • Running callin.io via (Docker, npm, callin.io cloud, desktop app): Docker
  • Operating system: Windows
 
Posted : 19/03/2025 9:23 am
jksr
(@jksr)
Posts: 6
Active Member
 

Hello - If your model is compatible with the OpenAI API standard, you can simply use the OpenAI model node and adjust the base_url, the model name, and your API key. Please let me know if this approach works for you.
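To see what that approach amounts to on the wire, here is a minimal sketch of the request an OpenAI-compatible path implies. The gateway host, model name, and key below are placeholders, not callin.io internals:

```python
# Sketch: the request shape an OpenAI-compatible chat endpoint expects.
# Host, model name, and API key are hypothetical placeholders.
def chat_request(base_url, api_key, model, messages):
    """Build the pieces of a POST to /v1/chat/completions."""
    return {
        "url": base_url.rstrip("/") + "/v1/chat/completions",
        "headers": {
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        "json": {"model": model, "messages": messages},
    }

req = chat_request(
    "http://my-gateway.local:8000",  # hypothetical offline gateway
    "dummy-key",                     # many self-hosted servers ignore the key
    "my-model",
    [{"role": "user", "content": "Hello"}],
)
print(req["url"])  # http://my-gateway.local:8000/v1/chat/completions
```

You could send this with `requests.post(**req)` to test the gateway outside callin.io before wiring up the node.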

 
Posted : 19/03/2025 9:57 am
naor117
(@naor117)
Posts: 5
Active Member
Topic starter
 

Hi jksr, thank you very much for the quick response.

I don’t have an api_key because I’m working with a model that I uploaded to HTTP and I want to access it.

When I enter a fake api_key like XXXXXXXXXXXXXX and the HTTP URL of my model (which works with HTTP triggers), I get “Couldn’t connect with these settings”.

 
Posted : 19/03/2025 10:13 am
jksr
(@jksr)
Posts: 6
Active Member
 

What do you mean by uploaded to HTTP? Is the model operating on your own server? By HTTP trigger, are you referring to the node?

If possible, I recommend configuring an API key; otherwise, it's essentially accessible to everyone.

 
Posted : 19/03/2025 10:21 am
naor117
(@naor117)
Posts: 5
Active Member
Topic starter
 

Hi, yes my model is hosted on a server I built, and when I create an HTTP node, I can successfully interact with it and receive a response.
I'm looking to connect this model to the LLM model or AI Agent node within callin.io.
Regarding the API key, it's not necessary in this case because the model resides on my private server, and only I initiate contact with it.
Is it feasible to link my model to the AI Agent or LLM model node in callin.io?

 
Posted : 20/03/2025 7:04 am
jksr
(@jksr)
Posts: 6
Active Member
 

This is achievable if your self-hosted model is compatible with the OpenAI API protocol. You can find more information here: Quickstart — vLLM
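A quick way to check compatibility is to GET the server's /v1/models endpoint, which an OpenAI-compatible server such as vLLM exposes, and inspect the shape of the JSON it returns. A small sketch (the model id in the sample payload is hypothetical):

```python
# Sketch: shape check on the JSON returned by GET /v1/models, the listing
# endpoint an OpenAI-compatible server (e.g. vLLM) exposes.
def looks_openai_compatible(models_json):
    """True if the payload has the OpenAI list-of-models shape."""
    return (
        models_json.get("object") == "list"
        and isinstance(models_json.get("data"), list)
        and all("id" in m for m in models_json["data"])
    )

# Example payload in the OpenAI list shape (model id is hypothetical):
sample = {"object": "list", "data": [{"id": "my-model", "object": "model"}]}
print(looks_openai_compatible(sample))  # True
```

If your server answers /v1/models in this shape, the OpenAI model node with an overridden base_url has a good chance of working.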

 
Posted : 20/03/2025 7:28 am
naor117
(@naor117)
Posts: 5
Active Member
Topic starter
 

My model's request body looks different from OpenAI's :frowning:
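One common workaround for a mismatched body is a thin translation layer in front of the model: accept the OpenAI chat shape, forward your own shape, and wrap the reply back. A sketch of the two conversions; the custom field names `prompt` and `reply` are hypothetical stand-ins for whatever your server actually expects:

```python
# Sketch of an adapter between the OpenAI chat format and a custom model
# body. The custom keys "prompt" and "reply" are hypothetical.
def openai_to_custom(openai_body):
    """Flatten OpenAI chat messages into a single prompt string."""
    prompt = "\n".join(
        f"{m['role']}: {m['content']}" for m in openai_body["messages"]
    )
    return {"prompt": prompt}

def custom_to_openai(custom_reply, model="my-model"):
    """Wrap a custom reply back into the OpenAI chat-completion shape."""
    return {
        "object": "chat.completion",
        "model": model,
        "choices": [
            {
                "index": 0,
                "message": {"role": "assistant",
                            "content": custom_reply["reply"]},
                "finish_reason": "stop",
            }
        ],
    }

body = openai_to_custom({"messages": [{"role": "user", "content": "Hi"}]})
print(body)  # {'prompt': 'user: Hi'}
```

Served behind a small web framework (Flask, FastAPI, etc.), this kind of shim lets the OpenAI model node talk to the custom endpoint without changing either side.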

 
Posted : 20/03/2025 8:40 am
mdmiko
(@mdmiko)
Posts: 5
Active Member
 

Could someone please clarify?

Does your LLM function by sending an API request and receiving a response? If so, what is the purpose of an AI node?

Perhaps it would be simpler to construct a sub-workflow that includes an input parameter, a set node for the prompt (which could also be a parameter), and an output webhook to capture the reply?
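The sub-workflow above, expressed as plain steps in code (the prompt template and payload keys are made up for illustration, not callin.io node settings):

```python
# Sketch of the sub-workflow: an input parameter is templated into a prompt
# (the Set-node step), then shaped into the payload for the HTTP call and
# the reply for the output webhook. Template and key names are hypothetical.
PROMPT_TEMPLATE = "Answer concisely: {question}"

def build_payload(question):
    """Input parameter -> Set node (prompt) -> body for the HTTP request."""
    return {"prompt": PROMPT_TEMPLATE.format(question=question)}

def webhook_response(model_reply):
    """Shape of the reply the output webhook hands back to the caller."""
    return {"answer": model_reply}

print(build_payload("What is callin.io?"))
# {'prompt': 'Answer concisely: What is callin.io?'}
```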

 
Posted : 20/03/2025 9:55 am
naor117
(@naor117)
Posts: 5
Active Member
Topic starter
 

Yes, I created an HTTP node, but I suppose there are advantages to creating a Chat Model node, because I believe the communication there is more continuous than a single HTTP call.

 
Posted : 23/03/2025 12:10 pm
mdmiko
(@mdmiko)
Posts: 5
Active Member
 

I believe the communication is the same either way. The AI node simply simplifies the configuration and integration of tools, API endpoints, models, and more. I'm currently using a custom node for Google's experimental models, which require specific configurations for the request body, and it works perfectly.

 
Posted : 23/03/2025 6:01 pm
system
(@system)
Posts: 332
Reputable Member
 

This thread was automatically closed 90 days following the last response. New replies are no longer permitted.

 
Posted : 21/06/2025 6:01 pm