Hello everyone!
I'm currently building a scenario for a WhatsApp assistant, and it works perfectly except for one significant detail: speed. Each OpenAI API call takes 5-10 seconds, and since each execution makes 1-3 calls, the whole scenario takes about 30 seconds, which doesn't feel very responsive for a chat assistant.
Does anyone know if this can be adjusted? Ideally, I'd like the scenario to run much faster.
I'm happy to share more details about the scenario and would be willing to compensate anyone who can resolve this issue.
Have a great day!
Hi there,
It's great to see your WhatsApp assistant project progressing!
I understand the delays you're experiencing with your OpenAI API calls. To improve speed, a good first step is to shorten your prompts, which reduces the number of tokens the model has to process. Additionally, if your API calls don't depend on each other's results, consider running them in parallel instead of one after another, so the total wait is closer to the slowest single call rather than the sum of all of them.
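To illustrate the effect of overlapping independent API calls, here is a minimal Python sketch. It does not call the real OpenAI API; the `call_model` function, the prompts, and the 0.3-second delay are illustrative assumptions standing in for a slow network round-trip:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_model(prompt: str) -> str:
    """Stand-in for an OpenAI API call (hypothetical); a short sleep
    simulates the network and model latency."""
    time.sleep(0.3)
    return f"response to: {prompt}"

prompts = ["classify intent", "draft reply", "summarize thread"]

# Sequential: total latency is the SUM of the individual calls.
start = time.perf_counter()
sequential = [call_model(p) for p in prompts]
sequential_time = time.perf_counter() - start

# Concurrent: independent calls overlap, so total latency is
# roughly the duration of the single slowest call.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=len(prompts)) as pool:
    concurrent = list(pool.map(call_model, prompts))
concurrent_time = time.perf_counter() - start

print(f"sequential: {sequential_time:.2f}s, concurrent: {concurrent_time:.2f}s")
```

This only helps when the calls are truly independent; if call 2 needs the output of call 1, they must stay sequential (or be merged into a single prompt).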
I'd be glad to look over your setup and offer a customized solution. If you need this resolved promptly, I'm available on weekends as well.
You can explore my services, connect on LinkedIn, or book a quick call to get started.
I'm looking forward to your reply and the chance to help you further.
What is the business case?
What industry are you targeting with this application?
Without examining it more closely, two initial thoughts come to mind:
- Consolidating multiple prompts into a single one.
- Utilizing faster, albeit less accurate, models.
However, whether either approach is viable can only be determined by testing it in your specific context.
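The first suggestion (consolidating prompts) can be sketched as follows. The task texts, the `###` answer delimiter, and the mock reply are illustrative assumptions, not part of the original scenario; the payload merely follows the shape of an OpenAI chat-completion request without sending it:

```python
# Merge several small tasks into one request so that only one network
# round-trip is paid instead of three.
tasks = [
    "Detect the language of the user's message.",
    "Classify the message as question/complaint/other.",
    "Draft a one-sentence reply.",
]

combined_prompt = (
    "Answer each numbered task, separating answers with '###':\n"
    + "\n".join(f"{i + 1}. {t}" for i, t in enumerate(tasks))
)

# Request payload in the shape of an OpenAI chat-completion call;
# a smaller model is named here per the second suggestion.
request = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": combined_prompt}],
}

# The model's reply would then be split back into per-task answers.
# A mock reply stands in for the real API response:
mock_reply = "French ### question ### Merci, nous revenons vers vous."
answers = [a.strip() for a in mock_reply.split("###")]
print(answers)
```

The trade-off is that one large prompt makes each answer slightly harder to control than three focused prompts, so the output format needs to be tested carefully.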
This thread was automatically closed after 30 days. No further replies can be made.