This was a pretty heavy lift for us to get out, which is why it took a while. In addition to writing new image processing routines, a vision encoder, and doing cross attention, we also ended up re-architecting the way the models get run by the scheduler. We'll have a technical blog post soon about all the stuff that ended up changing.
jjice 457 days ago [-]
Y'all did a fantastic job! This works great and to have it all right there inside of Ollama is a huge step for local model execution.
zozbot234 457 days ago [-]
How long until Vulkan Compute support is merged into ollama? There is an active pull request at https://github.com/ollama/ollama/pull/5059 but it seems to be stalled with no reviews.
exe34 457 days ago [-]
did you feed back into llama.cpp?
also, can it do grounding like cogvlm?
either way, great job!
Patrick_Devine 457 days ago [-]
It's difficult because we actually ditched a lot of the c++ code with this change and rewrote it in golang. Specifically server.cpp has been excised (which was deprecated by llama.cpp anyway), and the image processing routines are all written in go as well. We also bypassed clip.cpp and wrote our own routines for the image encoder/cross attention (using GGML).
The hope is to be able to get more multimodal models out soon. I'd like to see if we can get Pixtral and Qwen2.5-VL in relatively soon.
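For the curious, the Go side of the image preprocessing is conceptually along these lines. This is a simplified sketch, not the actual Ollama source: it does a single squash-resize instead of the aspect-ratio tiling the real pipeline does, and the 560x560 tile size and CLIP-style normalization constants here are just the published numbers, so treat them as illustrative:

    package main

    import (
        "fmt"
        "image"
        _ "image/jpeg"
        _ "image/png"
        "log"
        "os"

        "golang.org/x/image/draw"
    )

    // preprocess decodes an image, resizes it to the encoder's input
    // resolution, and normalizes pixels into a CHW float32 tensor,
    // roughly what a CLIP-style vision encoder expects.
    func preprocess(path string) ([]float32, error) {
        f, err := os.Open(path)
        if err != nil {
            return nil, err
        }
        defer f.Close()

        src, _, err := image.Decode(f)
        if err != nil {
            return nil, err
        }

        const size = 560 // illustrative tile size
        dst := image.NewRGBA(image.Rect(0, 0, size, size))
        draw.CatmullRom.Scale(dst, dst.Bounds(), src, src.Bounds(), draw.Over, nil)

        // CLIP-style per-channel stats; illustrative values.
        mean := [3]float32{0.481, 0.458, 0.408}
        std := [3]float32{0.269, 0.261, 0.276}

        out := make([]float32, 3*size*size)
        for y := 0; y < size; y++ {
            for x := 0; x < size; x++ {
                r, g, b, _ := dst.At(x, y).RGBA() // 16-bit channels
                px := [3]float32{float32(r) / 65535, float32(g) / 65535, float32(b) / 65535}
                for c := 0; c < 3; c++ {
                    out[c*size*size+y*size+x] = (px[c] - mean[c]) / std[c] // CHW layout
                }
            }
        }
        return out, nil
    }

    func main() {
        t, err := preprocess(os.Args[1])
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("tensor length:", len(t)) // 3*560*560
    }

The resulting tensor is what gets handed to the GGML-side encoder.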
qrios 457 days ago [-]
> Specifically server.cpp has been excised (which was deprecated by llama.cpp anyway)
Is there any more specific info available about who (llama.cpp or Ollama) removed what, where? As far as I can see, the server is still part of llama.cpp.
And more generally: Is this the moment when Ollama and llama.cpp part ways?
exe34 457 days ago [-]
that's cool, thank you! no grounding then? I don't get the impression it's actually part of llama 3.2v, but I thought it was worth checking with somebody who might have the experience!
Patrick_Devine 456 days ago [-]
I haven't looked at cogvlm, but if you mean doing bounding boxes w/ classification, I'd love to support models like that (like detectron2) in the future.
exe34 456 days ago [-]
I'm not sure what you mean by classification, but something like it, yes:
"what are the coordinates of the bounding box for the rubber duck in the image [img]"
>>> "[10,50,200,300]"
csomar 457 days ago [-]
Any info on when we will get the 11B and 90B models?
These are vision optimized, though? Or does that not make them perform worse at coding tasks?
sgt101 457 days ago [-]
I tested the small model with a few images from Clevr. At first blush I am afraid it didn't do very well at all: it got object counts totally wrong and struggled to identify shapes and colours.
Still, it seems to understand what's in the images in general (cones and spheres and cubes), and the fact that it runs on my MacBook at all is basically amazing.
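For what it's worth, my test loop was nothing fancier than pasting image paths into the REPL, roughly:

    $ ollama run llama3.2-vision
    >>> How many cubes are in this image? ./CLEVR_val_000012.png

(the path is just an example from a local copy of the dataset; the CLI picks the filename out of the prompt and loads the image itself).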
EdwardKrayer 456 days ago [-]
Ran the 11B yesterday and it worked great. My initial testing was with charts - I've been waiting on local vision models to be good enough to feed technical documents, and so far it's looking very good. Example: https://i.imgur.com/1ETREP9.png
sgt101 456 days ago [-]
I've tried with some ppt images rather than Clevr ones and it does much better. It can count circles and triangles and differentiate between them quite well. It can recognise the colours of the objects as well.
I think the faux-3D of the Clevr images is too much for the model. It's interesting, because much smaller pre-transformer specialist models were very good at Clevr.
o11c 457 days ago [-]
Did they fix multiline editing yet? Any interactive input that wraps across 3+ lines seems to become off-by-one when editing (though it's fine if you only append?), and this will only become more common with long filenames being added. And triple-quoting breaks editing entirely.
How does this address the security concern of filenames being detected and read when not wanted?
ei23 457 days ago [-]
Is Qwen2VL supported too?
It's a great vision model and works in ComfyUI.
Llama 3.2's vision seems to be super censored...
papruapap 457 days ago [-]
I thought llama.cpp didn't support images yet. Has that changed, or is Ollama using a different library for this?
SCLeo 457 days ago [-]
I believe they wrote their own image handling and did not contribute back to llama.cpp.
papruapap 453 days ago [-]
oh sad :(, hope they upstream it at some point.
zamderax 457 days ago [-]
Does anyone know if this will run on the iPhone 15 (6GB) or iPhone 16 (8GB)?
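Rough napkin math, not a tested answer: the 11B weights alone at 4-bit quantization come to roughly 11e9 params x 0.5 bytes ≈ 5.5GB, before the vision encoder, the KV cache, and everything iOS reserves for itself (and iOS caps how much RAM a single app may allocate anyway). So 6GB looks like a non-starter, and 8GB would be very tight at best.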