After seeing many posts about it on my feed, I decided to give it a spin.
The good:
- Open source.
- Can run locally (Apple Silicon) at a fair speed.
- Image detection is good.
The bad:
- Not detecting tables.
- Text in a perfectly clean PDF (resume) is not detected.
I know it's in preview, small, and open source, which is great, but it's far from usable.
daemonologist 2 days ago [-]
Well it's certainly small. Absolutely bombs my KTANE test though - poor character recognition, poor handling of even mildly complex tables, and prone to getting stuck in repetition loops. (The task was converting to docling, in the official HF space.)
That said, I'm definitely glad to see work in this area, particularly with open weights.
t1amat 1 day ago [-]
Regarding the repetition loops, I found that adding the end-of-turn token to the stop param was enough. The documentation mentions detecting this.
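Not the model's documented behavior, just a sketch of the mechanism being described: a `stop` parameter (or its client-side equivalent) truncates the generation at the first end-of-turn marker, so a repetition loop after that marker never reaches the output. The token name `<end_of_utterance>` is an assumption borrowed from SmolVLM-style models; check the actual tokenizer config of the model you run.

```python
def truncate_at_stop(text: str, stop_tokens: list[str]) -> str:
    """Cut generated text at the first occurrence of any stop token,
    mimicking what a server-side `stop` parameter does."""
    cut = len(text)
    for tok in stop_tokens:
        idx = text.find(tok)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

# A model stuck in a repetition loop keeps emitting past the end-of-turn marker;
# truncating at the marker discards the looped tail.
raw = "<doctag><text>Hello</text></doctag><end_of_utterance><doctag><text>Hello</text>"
print(truncate_at_stop(raw, ["<end_of_utterance>"]))
# -> <doctag><text>Hello</text></doctag>
```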
But your point about quality stands. Separately, this model emits the docling XML format, not the JSON format, so as far as I know that means you're limited to the Python flavor of docling; the JS variant doesn't support it yet.
It would be interesting to see whether fine-tuning on your KTANE test improves your results.
dartos 2 days ago [-]
I may be missing something, but if you train a model to pass a specific test… isn’t it obvious that it would do better on that test?
I thought we called models with test data in their training set “poisoned”
sitkack 1 day ago [-]
That is how training is done. You don't train on all of your material or the tests are worthless.
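The split being described, sketched in plain Python (a hypothetical 80/20 hold-out; real training pipelines use library utilities for this):

```python
import random

def train_test_split(items, test_fraction=0.2, seed=0):
    """Hold out a fraction of the data so evaluation never sees training examples."""
    rng = random.Random(seed)
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_fraction))
    return shuffled[:cut], shuffled[cut:]

train, test = train_test_split(range(100))
# The two sets are disjoint: nothing in the test set was trained on.
assert not set(train) & set(test)
```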
dartos 1 day ago [-]
If you train on any test material, the tests are worthless.
th0ma5 2 days ago [-]
Does it seem comparable to Tesseract? I feel like the accuracy results still aren't significantly improved as a whole.
nanoxid 2 days ago [-]
OCR is not the task being solved here, though. This is supposed to help you when dealing with complex layouts where text is not just read left-to-right, top-to-bottom.
But I agree that accurate OCR is kind of a prerequisite for adoption.
bugglebeetle 2 days ago [-]
What's the best library for fine-tuning VLMs at the moment, and do any of them support this architecture, or the one used for the IBM Granite vision models? Document understanding tasks seem especially in need of fine-tuning.
It was fine-tuned from this: https://huggingface.co/HuggingFaceTB/SmolVLM-256M-Instruct
There's an example of fine-tuning that base model which would likely be applicable to this one as well.