Is callin.io open to investigating and fixing function calling for Ollama so that it becomes reliable and functional? That would be awesome, as many people would prefer to keep their data local, or not use the feature at all.
Has this thread been resolved? There is only the standard message, "This topic will close 3 months after the last reply." So no bot is going to chime in with something more promising?
Hi there! I'm having trouble understanding why Ollama works with AI agents on some self-hosted servers, for instance on version 1.102.4, but is not available on 1.103.2. Could someone explain why this might be happening?
I'm encountering broadly similar issues. Even models explicitly labeled for "tool" support in Ollama don't function as expected. After considerable effort, I've found that qwen3:8b performs quite well, but being restricted to a single model is limiting. Having access to deepseek-r1 would be beneficial.
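For anyone comparing models the way the poster above did, one way to probe whether a model actually honors tool calls is to send a request with a `tools` array to Ollama's `/api/chat` endpoint and check whether the reply's message contains `tool_calls`. Below is a minimal sketch of the request payload only (no live server needed); the `get_weather` tool and its schema are hypothetical examples, not something from this thread:

```python
import json

# Sketch of an Ollama /api/chat request body for probing tool-calling support.
# The tool definition below is a made-up example for illustration.
payload = {
    "model": "qwen3:8b",  # a model the poster above found reliable
    "messages": [
        {"role": "user", "content": "What is the weather in Paris?"}
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool name
                "description": "Look up the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "stream": False,
}

# Serialize exactly as it would be POSTed to http://localhost:11434/api/chat
body = json.dumps(payload)
print(body)
```

If the model supports tools, the JSON response's `message` should include a `tool_calls` list naming `get_weather` with a `city` argument; models without real tool support typically answer in plain prose instead, which matches the unreliable behavior people are describing here.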
We are unable to use OpenAI and are restricted to Ollama or other local providers for on-premise deployments. If you've found any local provider that works better than Ollama, please share your findings.