## Describe the issue/error/question
I am making an API call to retrieve all products from an online e-commerce store, which returns approximately 2,500 items. The API output lacks certain product details, such as the category, tax category, and producer name; instead, it provides IDs for these fields (e.g., `product_category_id`, `tax_category_id`, `producer_id`). To obtain the corresponding names, I need to fetch them from Airtable using these IDs. Consequently, I am executing a custom Airtable API call (not the Airtable List module, since that only runs once per workflow execution, not once per item) to match the IDs with their respective names in Airtable.
Once I have all the necessary information from Airtable, the next step is to generate a CSV file containing all the product details.
However, due to the large volume of items processed in the workflow, the execution takes a considerable amount of time (around 30 minutes, which is acceptable). More critically, the workflow frequently crashes, or its status becomes "Unknown" (indicating "The workflow execution is probably still running but it may have crashed and callin.io cannot safely tell").
I'm uncertain if this issue stems from callin.io itself or the server hosting it.
I am currently throttling the Airtable API calls to 10 requests every 3 seconds (roughly 3.3 requests per second), which keeps me under Airtable's API limit of 5 requests per second.
Given that we have only a limited number of product categories (around 30) and tax categories (approximately 5-10), it seems inefficient to make an individual Airtable API call for each of the 2,500 items.
Note: Since category names might change and new categories could be added, I am avoiding hardcoding a switch or function module for this mapping.
I have also attempted to use loops with Airtable lists, but without success.
## What is the error message (if any)?
Unknown: The workflow execution is probably still running but it may have crashed and callin.io cannot safely tell.
## Please share the workflow
For this sample workflow, I am providing 20 items with a single field, `product_category_id`. The workflow is designed to match these `product_category_id` values with data in Airtable using an API call. In this sample data, there are only 6 distinct categories. The goal is to make the Airtable API call only 6 times, instead of once for every item, and then associate the correct category name with each of the 20 items based on its `product_category_id`. This way, when using a SET node or creating a CSV, all 20 items will include both the `product_category_id` and its corresponding category name.
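Conceptually, the deduplication step I have in mind would look something like this in a Function node (a rough sketch, not my actual workflow, assuming the classic Function node where `items` holds the incoming data):

```javascript
// Function node: collect the distinct product_category_id values from
// all incoming items, so the Airtable lookup only has to run once per
// unique ID (6 calls instead of 20 in this sample data).
const uniqueIds = [...new Set(items.map(item => item.json.product_category_id))];

// Emit one item per unique ID; the next node (the Airtable API call)
// then runs once per category instead of once per product.
return uniqueIds.map(id => ({ json: { product_category_id: id } }));
```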
Additionally, in the actual scenario, I would need to perform separate lookups for each ID field (product category ID, tax category ID, and producer ID) and then consolidate all the information to obtain the complete product details.
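The consolidation step could then be something like the following sketch in a Function node, assuming the Airtable responses have already been collected into plain ID-to-name maps (the map names here are made up for illustration):

```javascript
// Function node: attach the resolved names to every product item.
// The three lookup maps below stand in for data gathered from the
// earlier Airtable calls; their names and shape are assumptions for
// this sketch, e.g. { "rec123": "Beverages", ... }.
const categoryNames = { /* product_category_id -> name */ };
const taxCategoryNames = { /* tax_category_id -> name */ };
const producerNames = { /* producer_id -> name */ };

return items.map(item => {
  const p = item.json;
  return {
    json: {
      ...p,
      category_name: categoryNames[p.product_category_id] ?? null,
      tax_category_name: taxCategoryNames[p.tax_category_id] ?? null,
      producer_name: producerNames[p.producer_id] ?? null,
    },
  };
});
```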
## Share the output returned by the last node
## Information on your callin.io setup
- callin.io version: 0.199.0
- Database you’re using (default: SQLite):
- Running callin.io with the execution process [own(default), main]:
- Running callin.io via [Docker, npm, n8n.cloud, desktop app]: Hosted on a server
Hi, I am sorry to hear you’re having trouble.
Grouping items is tricky in callin.io, as most nodes (including the HTTP Request node) apply their logic to each individual item. So, I'd suggest avoiding this unless there is no other way to make a workflow work.
The status you are seeing often suggests that a workflow execution requires more memory than is available. It's worth checking the server logs for any additional indicators of that if you want to be sure.
With this in mind, if your Item Lists node runs successfully, you might want to consider splitting your data into smaller batches afterwards using the Split in Batches node. Then hand each batch over to a sub-workflow via the Execute Workflow node, and make sure each sub-workflow only returns a very small dataset (something like `{ "success": true }`, or even an empty item instead of the full response data).
This sub-workflow approach means that the memory required for each batch is freed again once each sub-workflow execution has finished.
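For example, the last node of each sub-workflow could be a Function node that discards the processed data and returns a single tiny item (a sketch, assuming the classic Function node):

```javascript
// Final Function node of the sub-workflow: return one minimal item so
// the parent workflow does not accumulate the full response data in
// memory for every batch it processes.
return [{ json: { success: true } }];
```

Since the parent workflow keeps every item returned by the Execute Workflow node in memory, returning a single small item per batch keeps the overall memory footprint flat regardless of how many batches you process.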