Persistent "Bad Request: Must Provide Model Parameter" with OpenAI Embeddings (callin.io Cloud v1.98.1)

mlanzlot
(@mlanzlot)
Posts: 1
New Member
Topic starter
 

I'm encountering a critical and persistent issue when trying to generate OpenAI embeddings using the HTTP Request node in callin.io Cloud. Despite thorough troubleshooting, the problem persists, and the error message seems to contradict the request body shown in callin.io's own error logs.

Environment:

  • callin.io Cloud Version: 1.98.1
  • Node in Question: HTTP Request (used for OpenAI Embeddings API call)
  • OpenAI Model: text-embedding-3-large

Problem Description: My workflow is designed to:

  1. Extract text and include metadata (documentID_clean, tags).
  2. Generate embeddings for this text via OpenAI’s /v1/embeddings endpoint.
  3. Store the embeddings and metadata in a Qdrant vector store.

The workflow consistently fails at the HTTP Request node responsible for calling the OpenAI embeddings API. The error received is: "Bad request - please check your parameters [item 0] you must provide a model parameter" or sometimes: "We could not parse the JSON body of your request. (HTTP) This likely means you aren't using your HTTP library correctly. The OpenAI API expects a JSON payload, but what was sent was not valid JSON."

Crucial Contradiction: The most puzzling aspect is that the Request field within the error details consistently displays a valid JSON body being sent, which includes the model parameter (e.g., "model": "text-embedding-3-large") and the input text. This directly contradicts the error message from OpenAI.
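
For context, the body the /v1/embeddings endpoint expects, and the shape the Request field in the error details displays, is simply:

```json
{
  "model": "text-embedding-3-large",
  "input": "Your document text..."
}
```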

Detailed Troubleshooting Steps Performed (All have failed to resolve the issue):

  1. OpenAI API Key Validation:
  • Confirmed the OpenAI API key is valid and functions correctly in other callin.io workflows (e.g., successfully calling https://api.openai.com/v1/models using the same credential). This rules out general API key or account issues.
  • Confirmed callin.io can establish a connection to OpenAI.
  2. JSON Body Preparation (Code Node):
  • Implemented a dedicated Code node (Prepare OpenAI Request Body (Code)) to construct the precise JSON payload required by OpenAI ({"model": "...", "input": "..."}); the node is sketched after this list.
  • The Code node then serialises this object with JSON.stringify() into a single string (requestBodyString).
  • Confirmed the output of this Code node is a valid, correctly escaped JSON string (e.g., {"model":"text-embedding-3-large","input":"Your document text..."}). This was verified by inspecting the node’s output and using external JSON validators.
  3. HTTP Request Node Configuration (Extensive Permutations):
  • Body Content Type: Raw, Raw Body Type: JSON: This configuration, with Body: {{$json.requestBodyString}} and a Content-Type: application/json header, is the standard method. It previously produced a “token limit exceeded” error (showing the JSON was being parsed), but now fails with the “model parameter missing” error after switching to text-embedding-3-large.
  • Body Content Type: JSON, Specify Body: Expression: Attempted JSON.parse($json.requestBodyString). This resulted in “JSON parameter needs to be valid JSON” errors, suggesting callin.io’s UI was not correctly evaluating the expression in this context.
  • Body Content Type: JSON, Specify Body: Using JSON: Attempted to build the JSON via callin.io’s form. This consistently failed, either stripping expressions or reporting invalid JSON.
  • Hardcoding the Body: As a final diagnostic, the stringified JSON output from the Code node was copied and pasted directly into the HTTP Request node’s Raw body field (replacing the expression). This also failed with the same “model parameter missing” error, which strongly indicates the issue is not with the JSON content itself or expression evaluation, but with the transmission (see the standalone request sketch after this list).
  4. OpenAI Model:
  • Switched from text-embedding-ada-002 to text-embedding-3-large to address token limit issues. The error persists with the new model.
  5. Code Node for Direct API Call (Bypassing HTTP Request Node UI):
  • Attempted to make the OpenAI API call directly from a Code node using callin.io.getCredentials() and callin.io.httpRequest().
  • Encountered “callin.io is not defined” and “Cannot find module” errors caused by environment-specific require paths and the Code node’s this context.
  • Even after correcting the Code node structure to use async execute(callin.io) and callin.io.getCredentials/callin.io.httpRequest, the node either returned nothing or failed with internal errors, suggesting the Code node environment itself may be problematic for direct HTTP calls in this version.
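
For reference, the body-preparation Code node from step 2 looks roughly like the following. This is a sketch only: it assumes the Code node exposes incoming items via $input.first() and returns items under a json key, and the text, documentID_clean, and tags field names are taken from the workflow described above.

```javascript
// Sketch of the "Prepare OpenAI Request Body (Code)" node from step 2.
// Builds the embeddings payload and stringifies it so the HTTP Request node
// can send it via {{$json.requestBodyString}} as a Raw/JSON body.
const item = $input.first().json;

const payload = {
  model: 'text-embedding-3-large',
  input: item.text, // extracted document text (field name assumed)
};

return [
  {
    json: {
      requestBodyString: JSON.stringify(payload),
      // Metadata carried along for the Qdrant insert later in the workflow.
      documentID_clean: item.documentID_clean,
      tags: item.tags,
    },
  },
];
```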
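And, to illustrate the claim in step 3 that the payload itself is sound, this is what a known-good request to the same endpoint looks like from a plain Node.js script (Node 18+ for the built-in fetch; the OPENAI_API_KEY variable name is only for this sketch). If the identical body and credential succeed here, the failure is on the callin.io transmission side.

```javascript
// Standalone sanity check, outside callin.io: send the same stringified body
// to the OpenAI Embeddings endpoint. Requires Node 18+ (built-in fetch).
const requestBodyString = JSON.stringify({
  model: 'text-embedding-3-large',
  input: 'Your document text...',
});

(async () => {
  const res = await fetch('https://api.openai.com/v1/embeddings', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`, // sketch-only variable name
    },
    body: requestBodyString,
  });

  const data = await res.json();
  // On success the vector is under data.data[0].embedding; errors come back under data.error.message.
  console.log(res.status, JSON.stringify(data).slice(0, 200));
})();
```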

Conclusion: Given that the request body is demonstrably correct within callin.io’s own logs, the API key is valid and works elsewhere, and all standard troubleshooting methods (including direct hardcoding and attempting programmatic calls) have failed, this strongly suggests a deeper, environmental issue. It could be a bug in callin.io Cloud (v1.98.1) concerning HTTP request transmission, or a specific interaction with OpenAI’s API that is not apparent from the user-facing configuration.

I am seeking urgent assistance to resolve this critical bottleneck in my workflow. Any insights or direct solutions would be greatly appreciated.

Thank you for your time and support.


 
Posted: 18/06/2025 2:54 pm