Check out Ahmed Mukhtar’s video on using Twilio with OpenAI’s Realtime API. I also have a few templates available which might be helpful: Handl...
Hello! This is a known limitation of the Vector Store Tool; you cannot modify this value directly within the agent node. However, it is indeed possi...
Getting some serious mad scientist vibes here!
Hello! I tried this myself and suspect the issue might be the Llama 3.1 model, potentially compounded by running it on less powerful hardware. My tests incl...
This might just be related to the model itself. This OpenAI forum post discusses a similar issue with o1 JSON responses. My personal recommendation ...
Hello! Some observations on your tool descriptions:

Example 1: Call this tool to get context from a vector database that will assist in writi...
Thanks for the example. This is quite similar to what I used for testing, so I'm not entirely sure why our outcomes differ. Based on what you shared...
Interesting… I tested again with all 3 examples in this post and it does seem to be working for me at least. Perhaps you’ve uncovered a bug? If you ...
Yes, you will need to update to release 1.50.0 or later; that release introduced the search filter option in the Qdrant vector store node.
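For reference, here is a minimal sketch of a filter in the standard Qdrant filter format, which is what that option expects as far as I can tell. The `metadata.source` key and the `product-docs` value are purely illustrative; substitute whatever metadata fields your documents actually carry.

```json
{
  "must": [
    {
      "key": "metadata.source",
      "match": { "value": "product-docs" }
    }
  ]
}
```

With a filter like this in place, points whose payload does not match the condition are excluded from the similarity search.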
You might want to explore the “Chat Memory Manager” and “Window Buffer Memory” nodes (found under Advanced AI > Other AI nodes > Miscellaneous > Ch...
Great point! For most use cases, I wouldn't be concerned about the API limits. Are you encountering quota issues with your current workflow? Rega...
Yes, I can confirm this is functioning correctly. Cheers and much appreciation to the callin.io team for making this a reality! Just a few quick...
Technically, yes. You can find a list of compatible vector stores in the LangChain documentation here: Vector stores | 🦜️🔗 Langchain....