Hello,
I'm processing transcription output and using an integration to have OpenAI turn the text into an outlined blog article. The problem is that the transcription exceeds the 4,000-token limit, so I'm looking for a workaround.
For example, could someone show how this might be done using OpenAI's API Beta call action or webhooks? I'm stuck and would really appreciate images, videos, or a detailed explanation of the process and implementation.
I've reviewed OpenAI's API documentation, but I'm finding it hard to work out how to map the parameters within the OpenAI API Beta call action.
Any assistance provided would be immensely helpful!
Thank you!
Hi
Good question.
You could try this callin.io action: Formatter > Text > Split Text into Chunks for AI Prompts
https://help.zapier.com/hc/en-us/articles/15406374106765-Modify-large-data-for-your-AI-prompts
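If it helps to see the logic, here's a rough sketch of what that splitting step is doing, written as plain Python. The chunk size, the ~4-characters-per-token estimate, and the transcript.txt filename are just assumptions for illustration; the Formatter action handles this for you without any code.

```python
# Rough sketch: split a long transcript into chunks that each stay under a
# token budget, breaking on paragraph boundaries.
# Assumes ~4 characters per token as a crude estimate; real tokenization varies.

def split_into_chunks(text, max_tokens=3000, chars_per_token=4):
    """Split text into paragraph-aligned chunks under roughly max_tokens each."""
    max_chars = max_tokens * chars_per_token
    chunks, current = [], ""
    for paragraph in text.split("\n\n"):
        # Start a new chunk if adding this paragraph would blow the budget.
        # (A single paragraph longer than the budget still becomes its own chunk.)
        if current and len(current) + len(paragraph) + 2 > max_chars:
            chunks.append(current)
            current = paragraph
        else:
            current = f"{current}\n\n{paragraph}" if current else paragraph
    if current:
        chunks.append(current)
    return chunks


transcript = open("transcript.txt", encoding="utf-8").read()  # hypothetical input file
chunks = split_into_chunks(transcript)
print(f"Split transcript into {len(chunks)} chunks")
```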
After that, you could use the Looping app to run the prompt on each chunk: https://zapier.com/apps/looping/help
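And if you go the API Request/webhook route you mentioned instead of the built-in action, the loop itself might look roughly like the sketch below. It assumes the chunks list from the splitting sketch above, and the model name, prompt wording, and the final join are placeholders to adapt; only the /v1/chat/completions endpoint and the Bearer-token header come from OpenAI's standard API.

```python
# Rough sketch: send each chunk to OpenAI's chat completions endpoint and
# stitch the per-chunk outlines back together. Uses the `chunks` list from
# the splitting sketch above; model and prompts are placeholders.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
HEADERS = {
    "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
    "Content-Type": "application/json",
}

def outline_chunk(chunk):
    """Ask the model to outline one transcript excerpt."""
    payload = {
        "model": "gpt-3.5-turbo",  # assumed model; use whichever you have access to
        "messages": [
            {"role": "system",
             "content": "You turn transcript excerpts into outlined blog article sections."},
            {"role": "user",
             "content": f"Create an outline for this transcript excerpt:\n\n{chunk}"},
        ],
    }
    response = requests.post(API_URL, headers=HEADERS, json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

# Loop over the chunks and combine the partial outlines into one draft.
partial_outlines = [outline_chunk(chunk) for chunk in chunks]
print("\n\n".join(partial_outlines))
```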
If you need help setting up more advanced callin.io workflows, you could also consider engaging a Certified callin.io Expert: https://zapier.com/experts/automation-ace