Native ollama provider #272
base: main
Conversation
…e when feeding tool results into the LLM.
I'm adding vision model support right now.
Vision model support is in. I also added optional support for an API key. While Ollama doesn't support API keys out of the box, it's common for people (particularly Open WebUI users) to use nginx as a reverse proxy to expose Ollama on the internet, often alongside tools like Cloudflare Tunnel.
I think I've addressed the issues around conversation history, streaming progress indicators, and handling multi-modal models. I've tested the vision functionality with llama4:latest and qwen2.5vl:72b-q8_0. Apologies again for the earlier version that was missing so much of the functionality you'd already built into the parent model; I hope these changes are of much better quality, and if there is more work to be done, please tell me what I need to fix. Thank you again for considering these changes.
This is looking very cool indeed, thank you :) I've pushed an update to the smoke tests to include an ollama.llama3.2:latest e2e test suite (I haven't had a chance to do much diagnosis). Tool calling fails with this:
Now, I did remove a line for the linter which I think may have been the content that needed to be sent back (the linter said it was an unused variable, which would explain it!). The other test I quickly tried was structured content. I don't know what options the Ollama API gives you here; I think for the OAI compatibility it offers the … This is looking very good, though!
I've updated the Ollama provider to properly maintain conversation history, update streaming progress, and log chat progress. This should now match the expected behavior much more closely.
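The history/streaming flow described above can be sketched roughly as follows. The names here are illustrative, not the provider's actual API; the only assumed fact is the shape of Ollama's streamed `/api/chat` chunks, where each chunk carries a `message.content` fragment. Each fragment is surfaced as progress, and the assembled reply is appended to the running message list so the next turn sees prior context.

```python
from typing import Iterable

def consume_stream(history: list, chunks: Iterable) -> str:
    """Accumulate streamed chat chunks and record the reply in history.

    Hypothetical sketch: `chunks` mimics the dicts Ollama yields while
    streaming /api/chat responses.
    """
    parts = []
    for chunk in chunks:
        parts.append(chunk["message"]["content"])
        # ...a real provider would emit a progress update to the UI here...
    reply = "".join(parts)
    # Persist the assistant turn so follow-up messages keep the context.
    history.append({"role": "assistant", "content": reply})
    return reply

# Simulated stream, in place of a live Ollama connection.
history = [{"role": "user", "content": "hi"}]
fake_stream = [{"message": {"content": "Hel"}},
               {"message": {"content": "lo"}}]
consume_stream(history, fake_stream)
```

The key point is that history is mutated once per completed turn, not per chunk, so partial replies never leak into the context sent back to the model.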