
Optimizing my scenario

5 Posts
5 Users
0 Reactions
4 Views
Nathan_Porterie
(@nathan_porterie)
Posts: 1
New Member
Topic starter
 

Hello everyone!

I'm currently building a scenario for a WhatsApp assistant, and it works perfectly except for one significant detail: speed. Each OpenAI API call takes 5-10 seconds, and since I make between 1 and 3 calls per execution, the whole scenario can take about 30 seconds, which doesn't feel very responsive.

Does anyone know if this can be adjusted? Ideally, I'd like the scenario to run much faster.

I'm happy to share more details about the scenario and would be willing to compensate anyone who can resolve this issue.

🙂

Have a great day!

 
Posted : 23/05/2025 10:25 pm
Pathfinder_Automate
(@pathfinder_automate)
Posts: 88
Trusted Member
 

Hello, welcome to the callin.io Hire a pro section! I'd be happy to collaborate with you on this. You can book a call Here to discuss the project requirements, and you can check out my Upwork profile Here.

 
Posted : 23/05/2025 10:47 pm
Probotic_Solutions
(@probotic_solutions)
Posts: 4
New Member
 

Hi there,

It's great to see your WhatsApp assistant project progressing!

I understand the delays you're experiencing with your OpenAI API calls. To improve speed, a good first step is to shorten your prompts, which should reduce processing time. Additionally, consider batching multiple API calls together to minimize wait times.
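One way to read the batching suggestion is to run independent OpenAI calls concurrently rather than one after another, so the total wait is roughly the slowest single call instead of the sum of all calls. The sketch below is only illustrative, since the scenario itself is built in callin.io rather than code: it assumes the official openai Python SDK (v1+), an OPENAI_API_KEY in the environment, and placeholder prompts and model that stand in for whatever the real scenario sends.

```python
# Minimal sketch: fire independent OpenAI calls concurrently so the total
# wait is roughly the slowest single call, not the sum of all calls.
# Assumes the openai Python SDK (v1+) and prompts that don't depend on
# each other's outputs; prompts and model below are placeholders.
import asyncio

from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment


async def ask(prompt: str) -> str:
    response = await client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


async def main() -> None:
    prompts = [
        "Classify the user's intent.",
        "Draft a short WhatsApp reply.",
        "Suggest a follow-up question.",
    ]
    # asyncio.gather sends all requests at the same time
    answers = await asyncio.gather(*(ask(p) for p in prompts))
    for answer in answers:
        print(answer)


asyncio.run(main())
```

If the calls depend on each other's outputs they can't be parallelized, and shortening the prompts (or merging the calls) becomes the main lever.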

I'd be glad to look over your setup and offer a customized solution. If you need this resolved promptly, I'm available on weekends as well.

You can explore my services, connect on LinkedIn, or book a quick call to get started.

I'm looking forward to your reply and the chance to help you further.

 
Posted : 24/05/2025 4:44 pm
andres
(@andres)
Posts: 3
New Member
 

What is the business case?

What industry are you targeting with this application?

Without examining it more closely, two initial thoughts come to mind:

  • Consolidating multiple prompts into a single one.
  • Utilizing faster, albeit less accurate, models.

However, these approaches are only feasible after testing and within specific contexts.
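To make the two ideas concrete, here is a minimal sketch that combines them: one consolidated prompt sent to a smaller, faster model, returning structured JSON that would otherwise take several calls to assemble. The model name, JSON fields, and example message are assumptions for illustration, not part of the original scenario, and accuracy with the faster model would need to be validated first.

```python
# Minimal sketch combining both ideas: one consolidated prompt instead of
# several calls, sent to a smaller/faster model. Model name, JSON fields,
# and the example message are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_message = "What are your opening hours?"  # placeholder incoming WhatsApp text

consolidated_prompt = (
    "You are a WhatsApp assistant. For the user message below, return JSON "
    "with two fields: 'intent' (one word) and 'reply' (a short answer).\n\n"
    f"User message: {user_message}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example of a faster, cheaper model; test accuracy first
    messages=[{"role": "user", "content": consolidated_prompt}],
    response_format={"type": "json_object"},  # machine-readable output in one call
)

print(response.choices[0].message.content)  # e.g. {"intent": "hours", "reply": "..."}
```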

 
Posted : 24/05/2025 7:13 pm
system
(@system)
Posts: 332
Reputable Member
 

This thread was automatically closed after 30 days. No further replies can be made.

 
Posted : 22/06/2025 10:26 pm