I didn't expect IBM to be making relevant AI models, but this thing is priced at $1 per 4,000,000 output tokens... I'm using it to transcribe handwritten text and it works very well and is super fast.
rubikscubeguy 74 days ago [-]
I'm the dev who made this :) We're looking into adding Granite!
intalentive 74 days ago [-]
IBM and Nvidia speech-to-text models are also SOTA (according to the HF leaderboard) and relatively lightweight. Replicate hosts those too, although some (like Parakeet) run easily on a consumer GPU.
nicman23 74 days ago [-]
English only :( It seems only models two orders of magnitude larger have support for e.g. Greek :/
irjustin 74 days ago [-]
Thanks for this! Will test this model out, because we do a lot of in-between steps to get around the output token limits.
Would be super nice if it worked for our use case and we could simply get the full output.
molf 74 days ago [-]
What is needed to evaluate OCR for most business applications (above everything else) is accuracy.
Some results look plausible but are just plain wrong. That is worse than useless.
Example: the "Table" sample document contains chemical substances and their properties. How many numbers did the LLM output and associate correctly? That is all that matters. There is no "preference" aspect that is relevant until the data is correct. Nicely formatted incorrect data is still incorrect.
I reviewed the output from Qwen3-VL-8B on this document. It mixes up the rows, resulting in many values associated with the wrong substance. I presume using its output for any real purpose would be incredibly dangerous. This model should not be used for such a purpose. There is no winning aspect to it. Does another model produce worse results? Then both models should be avoided at all costs.
Are there models available that are accurate enough for this purpose? I don't know. It is very time-consuming to evaluate. This particular table seems pretty legible. A real production-grade OCR solution would probably need a 100% score on this example before it could be adopted. The output of such a table is not something humans are good at reviewing. It is difficult to spot errors. It either needs to be entirely correct, or the OCR has failed completely.
I am confident we'll reach a point where a mix of traditional OCR and LLM models can produce correct and usable output. I would welcome a benchmark where (objective) correctness is rated separately from the (subjective) output structure.
Edit: Just checked a few other models for errors on this example.
* GPT 5.1 is confused by the column labelled "C4" and mismatches the last 4 columns entirely. And almost all of the numbers in the last column are wrong.
* olmOCR 2 omits the single value in column "C4" from the table.
* Gemini 3 produces "1.001E-04" instead of "1.001E-11" as viscosity at T_max for Argon. Off by 7 orders of magnitude! There is zero ambiguity in the original table. On the second try it got it right. Which is interesting! I want to see this in a benchmark!
There might be more errors! I don't know, I'd like to see them!
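For concreteness, here's a minimal sketch of the kind of objective check I have in mind, assuming hand-verified ground truth for the table exists (the rows below are placeholders, not values from the actual document, apart from the Argon viscosity mentioned above):

    // Hypothetical ground truth for a few cells of the "Table" sample.
    // Substances and values here are placeholders, not the real sheet.
    type Row = Record<string, string>;

    const groundTruth: Row[] = [
      { substance: "Argon", viscosityAtTmax: "1.001E-11" },
      { substance: "Methane", viscosityAtTmax: "2.350E-10" },
    ];

    // Cell-level accuracy: a value only counts if it sits in the right row
    // AND the right column, so transposed rows score as errors.
    function cellAccuracy(truth: Row[], extracted: Row[]): number {
      let correct = 0;
      let total = 0;
      truth.forEach((row, i) => {
        for (const [col, value] of Object.entries(row)) {
          total++;
          if (extracted[i]?.[col] === value) correct++;
        }
      });
      return total === 0 ? 1 : correct / total;
    }

    console.log(cellAccuracy(groundTruth, groundTruth)); // 1 on a perfect extraction

A per-cell score like this would separate "the data is right" from "the markdown looks nice", which is exactly the distinction the arena currently collapses.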
fzysingularity 70 days ago [-]
This is why arenas are generally a bad idea for assessing correctness in visual tasks.
deaux 74 days ago [-]
I suggest you make explicit the assumption that this website is specifically about English text. Otherwise the leaderboard is pretty meaningless, given the extreme differences in performance across other scripts - and potentially even across languages such as Vietnamese or Czech, which use the Latin script but have lots of diacritics.
rubikscubeguy 74 days ago [-]
Hey! I'm the dev who made this :) I think you're right, the data will bias towards English because the dataset we provide for people to use is in English. But you can also upload non-English docs in battle mode as well as the playground!
skissane 74 days ago [-]
LMArena splits their leaderboard by language: maybe you should consider doing the same thing
I assume to do that you’d need another model to do language detection on the inputs and/or outputs; but a language detection model can be a lot cheaper than an OCR model or an LLM
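Something like this sketch could work, assuming a lightweight trigram-based detector such as the franc package (one option among many; the exact option names are worth double-checking):

    import { franc } from "franc"; // tiny trigram-based detector, no GPU needed

    // Detect the language of OCR output so votes can be bucketed per language.
    // franc returns an ISO 639-3 code, or "und" when it can't decide.
    function detectLanguage(ocrText: string): string {
      return franc(ocrText, { minLength: 20 }); // very short snippets are unreliable
    }

    console.log(detectLanguage("Der schnelle braune Fuchs springt über den Zaun")); // "deu"
    console.log(detectLanguage("The quick brown fox jumps over the lazy dog"));     // "eng"

Running it on the output text means you don't even touch the input image, so the per-vote cost is negligible next to the OCR call itself.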
hdjrudni 74 days ago [-]
That's unfortunate, because I have a bunch of photos with handwritten German on the back that I need to transcribe, and seeing as I can't read German, I can't really do it by myself either.
deaux 74 days ago [-]
I reckon performance on German will be similar to English; the only real difference is the umlauts, and those are very consistent. Not sure how it will do on the ß.
nicman23 74 days ago [-]
Qwen 3.5 VL Instruct on OpenRouter is damn cheap, and it works quite well with non-English stuff.
I have it verify some stamps which are quite messy and sometimes obscured; honestly, some I could not even read myself.
maverwa 74 days ago [-]
From my first tests it does fine with German, at least for the ghastly "handwritten" font used by the restaurant menu I tested with.
cdrini 74 days ago [-]
So many OCR tools have popped up over the past ~year; we're sorely in need of benchmarks to compare them. Would love to see support for traditional OCR tools like Tesseract, EasyOCR, Microsoft Azure, etc. I'm using these for some projects, and my experiments with VLMs for OCR have resulted in too much hallucination for me to switch. Benchmarks comparing across this aisle would be incredibly useful.
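For reference, the standard metric on that side of the aisle is character error rate against a hand-checked transcription; a minimal sketch (hallucinated text raises the score just like missed text does):

    // Character error rate via Levenshtein distance: the edits needed to turn
    // the hypothesis into the reference, divided by the reference length.
    function levenshtein(a: string, b: string): number {
      const dp = Array.from({ length: a.length + 1 }, (_, i) =>
        Array.from({ length: b.length + 1 }, (_, j) =>
          i === 0 ? j : j === 0 ? i : 0
        )
      );
      for (let i = 1; i <= a.length; i++) {
        for (let j = 1; j <= b.length; j++) {
          dp[i][j] = Math.min(
            dp[i - 1][j] + 1, // deletion
            dp[i][j - 1] + 1, // insertion
            dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
          );
        }
      }
      return dp[a.length][b.length];
    }

    // CER = edit distance / reference length; 0 means a perfect transcription.
    function characterErrorRate(reference: string, hypothesis: string): number {
      return levenshtein(reference, hypothesis) / Math.max(reference.length, 1);
    }

    console.log(characterErrorRate("Total: 42.00 EUR", "Total: 47.00 EUR")); // 0.0625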
daemonologist 74 days ago [-]
A limitation of this leaderboard approach that I want to point out: while the large general-purpose LLMs can make greater leaps of inference (on handwriting and poor-quality scans), and almost always produce better layouts and more coherent output, they can also sometimes be less correct. My experience is that they're more prone to skipping or transposing sections of text, or even hallucinating completely incorrect output, than the purpose-trained models. (A similar comparison can be made in turn to the character- or word-based OCR approaches like Tesseract, which are even less "intelligent" but also even less prone to those misbehaviors.)
Also, some of the models are prone to infinite loops and I suspect this is not being punished appropriately; the frontend seems to get into a bad state after around 50k characters, which prevents the user from selecting a winner. Probably would be beneficial to make sure every model has an output length limit.
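As a sketch, a hard output cap on an OpenAI-compatible endpoint looks something like this (endpoint, model name, and the exact limit are placeholders):

    // Cap output length so a looping model runs out of budget instead of
    // wedging the frontend. Endpoint, model, and key are placeholders.
    async function transcribePage(imageUrl: string): Promise<string> {
      const res = await fetch("https://api.example.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.API_KEY}`,
        },
        body: JSON.stringify({
          model: "some-vlm",
          max_tokens: 8192, // hard ceiling; a looping model gets cut off here
          messages: [{
            role: "user",
            content: [
              { type: "text", text: "Transcribe this page to markdown." },
              { type: "image_url", image_url: { url: imageUrl } },
            ],
          }],
        }),
      });
      const data = await res.json();
      // finish_reason === "length" is a cheap loop/overflow signal worth logging
      return data.choices[0].message.content;
    }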
Still, a really cool resource - I'm looking forward to more models being added.
rubikscubeguy 74 days ago [-]
Totally agree w/ your first point! For the looping, we just added a stop condition for now in battle mode, and you can still vote on the other model afterwards. A bit of a hard problem to solve. We will add more models!
codeddesign 74 days ago [-]
Most of these are general LLMs and not specifically OCR models.
Where is Google Vision, Mistral, Paddle, Nanonets, or Chandra??
kbyatnal 74 days ago [-]
We wanted to keep the focus on (1) foundation VLMs and (2) open source OCR models.
We had Mistral previously but had to remove it because their hosted API for OCR was super unstable and returned a lot of garbage results unfortunately.
Paddle, Nanonets, and Chandra are being added shortly!
timbmg 74 days ago [-]
MistralOCR works stably for me when I first upload the file to their server and then run the OCR. I had some issues before when passing a URL directly to the OCR API; not sure if that's what you're doing?
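The upload-first flow looks roughly like this; I'm writing it against Mistral's REST API from memory, so treat every path and field name here as an assumption to verify against their docs:

    // Sketch of the upload-first flow. Endpoint paths and body fields are
    // recalled from Mistral's docs and may have changed -- verify before use.
    async function mistralOcr(pdf: Blob, apiKey: string) {
      const form = new FormData();
      form.append("purpose", "ocr");
      form.append("file", pdf, "document.pdf");

      // 1. Upload the file to Mistral's servers first...
      const uploaded = await (await fetch("https://api.mistral.ai/v1/files", {
        method: "POST",
        headers: { Authorization: `Bearer ${apiKey}` },
        body: form,
      })).json();

      // 2. ...get a signed URL for it...
      const { url } = await (await fetch(
        `https://api.mistral.ai/v1/files/${uploaded.id}/url`,
        { headers: { Authorization: `Bearer ${apiKey}` } },
      )).json();

      // 3. ...and only then run OCR against that URL, instead of handing
      // the OCR endpoint an arbitrary external URL.
      return (await fetch("https://api.mistral.ai/v1/ocr", {
        method: "POST",
        headers: {
          Authorization: `Bearer ${apiKey}`,
          "Content-Type": "application/json",
        },
        body: JSON.stringify({
          model: "mistral-ocr-latest",
          document: { type: "document_url", document_url: url },
        }),
      })).json();
    }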
rubikscubeguy 74 days ago [-]
nanonets is live now!
poulpy123 74 days ago [-]
I'm very impressed by the models, to the point that I wondered whether they were really converting the PDF or just reading its embedded text. I tried documents in French, English, and Spanish, very heavy on graphics and with complex layouts (a boardgame, a flyer, a book about rust), and I wasn't expecting anything great. Some models in particular reproduced symbols and smileys quite close to the original.
I noticed that some models resist faking data better than others. In particular, for a sentence that was cut off in the document, GPT-5 invented the end of the sentence while Opus properly showed it as cut off.
I didn't try it with my own writing, but in the playground there is one example, and some models read it better than I can.
I wish the output would show the confidence of the model on each part. I think it would help immensely.
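The closest thing available today from the LLM side is token logprobs, which some OpenAI-compatible APIs expose; a rough sketch (endpoint and model are placeholders, and not every provider supports logprobs):

    // Rough per-token confidence via logprobs. Not calibrated OCR confidence,
    // but low-probability tokens are a decent "look here" signal for review.
    async function transcribeWithConfidence(imageUrl: string): Promise<void> {
      const res = await fetch("https://api.example.com/v1/chat/completions", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.API_KEY}`,
        },
        body: JSON.stringify({
          model: "some-vlm",
          logprobs: true, // ask for per-token log probabilities
          messages: [{
            role: "user",
            content: [
              { type: "text", text: "Transcribe this document." },
              { type: "image_url", image_url: { url: imageUrl } },
            ],
          }],
        }),
      });
      const data = await res.json();
      // Flag tokens the model itself was unsure about (p < ~0.8).
      for (const t of data.choices[0].logprobs.content) {
        if (Math.exp(t.logprob) < 0.8) console.log("low confidence:", t.token);
      }
    }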
Note that sometimes a model gets stuck in a loop, preventing you from voting and from seeing which model is which.
est 74 days ago [-]
Off-topic, but what's the best OCR that can run offline in the browser with JS/WASM at reasonable CPU/memory cost?
Working on a hobby project that interacts with user handwriting on <canvas>. Tried some CNN models for digits but had trouble with characters.
yorwba 74 days ago [-]
If the text is written interactively on the canvas (as opposed to extraction from pixels) this task is known as "online handwriting recognition" ("online" because you can watch the text being formed incrementally, which makes it easier to e.g. distinguish individual strokes.)
I don't know what the state of the art is, but an old model for digitizer pens might not do too badly either.
Note that I haven't tried any of them, but Tesseract is still likely the leading open-source OCR that runs on CPU.
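For the JS/WASM side of the question, a minimal sketch with Tesseract.js, the WASM port of Tesseract (API as of its v5 docs; worth double-checking):

    import { createWorker } from "tesseract.js"; // runs fully offline once language data is cached

    // Minimal in-browser OCR of a <canvas>. Note this is classic *offline* OCR
    // on pixels; for interactive strokes, an online-handwriting model would
    // likely do better, as noted above.
    async function recognizeCanvas(canvas: HTMLCanvasElement): Promise<string> {
      const worker = await createWorker("eng"); // language data fetched once, then cached
      const { data } = await worker.recognize(canvas);
      await worker.terminate();
      return data.text;
    }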
zzleeper 75 days ago [-]
Love this! Would have liked to see something like Textract for a pre-LLM baseline (but of course that's expensive), and also a distinction between handwritten and printed text.
But still, this is incredibly useful!
tarruda 74 days ago [-]
Interesting that the 8B model of the Qwen3-VL family is in 9th place, above a few proprietary models. This thing can run locally with llama.cpp on modest hardware.
ComputerGuru 74 days ago [-]
Two suggestions:
UX on mobile isn't great. It wasn't obvious to me where the second model's output was, and I was thrown off even more because the option to vote for model 1's output was presented before I had even seen model 2's output.
Second suggestion would be to install a MathJax plugin so one can properly rate mathematical equations and formulas. Raw LaTeX is easy to misread, and it makes comparing LaTeX and Unicode outputs hard.
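A sketch of the rendering side, using KaTeX here only because it's lightweight; MathJax would slot in the same way:

    import katex from "katex"; // one option; MathJax would work equivalently

    // Render a model's raw LaTeX output to HTML so two outputs can be
    // compared visually instead of squinting at markup.
    function renderMath(latexSource: string): string {
      return katex.renderToString(latexSource, {
        throwOnError: false, // fall back to showing the raw source on bad LaTeX
      });
    }

    console.log(renderMath("\\frac{-b \\pm \\sqrt{b^2 - 4ac}}{2a}"));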
rubikscubeguy 74 days ago [-]
Hey! Dev who made this here. I hear you on the mobile UX, it's on my docket of things to fix. Same with math plugin! Thanks for the suggestions.
Really hope there will be a layout mode or an OCR-with-bounding-boxes mode; I want to see the model restore the whole page.
rubikscubeguy 74 days ago [-]
yeah, that would be a cool long term goal
ianhawes 75 days ago [-]
Please add Chandra by Datalab
ajmurmann 74 days ago [-]
Really like the idea. Unfortunately, my first upload is still spinning on one of the models about 5 minutes in. Clicking "Stop Battle" seems to do nothing either
rubikscubeguy 74 days ago [-]
Hey, I'm the dev who built this! Looking into it. Wondering if it's because of load due to this post.
densekernel 74 days ago [-]
Any plans to add Document Pre-trained transformer-2 (DPT-2) from https://landing.ai/?
coulix 74 days ago [-]
We need to see Landing.ai DPT-2; from my tests it's the best so far in terms of ability to extract structure from complex tables.
prodigycorp 74 days ago [-]
This needs a "both are bad" button. There are some generations where I cannot rightfully say one beats the other.
fzysingularity 75 days ago [-]
FYI one of the models on the battle was pretty slow to load. Are these also being rated on latency or just quality?
kbyatnal 74 days ago [-]
Ultimately, there’s some intersection of accuracy x cost x speed that’s ideal, which can be different per use case. We’ll surface all of those metrics shortly so that you can pick the best model for the job along those axes.
andrewlu0 74 days ago [-]
Ideally we want people to rate based on quality, but I imagine some of the results are biased right now by loading time.
hdjrudni 74 days ago [-]
That's an easy fix: wait for the slowest one and pop them both in at the same time, no?
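Roughly like this sketch, where all the named functions are hypothetical stand-ins for the app's internals:

    declare function runModelA(image: Blob): Promise<string>;
    declare function runModelB(image: Blob): Promise<string>;
    declare function revealSideBySide(
      a: PromiseSettledResult<string>,
      b: PromiseSettledResult<string>,
    ): void;

    // Start both models immediately, but only reveal once the slower one
    // finishes (so loading speed can't leak into the quality vote).
    async function runBattle(image: Blob): Promise<void> {
      const [a, b] = await Promise.allSettled([runModelA(image), runModelB(image)]);
      revealSideBySide(a, b); // both outputs appear at the same instant
    }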
mpercy123 74 days ago [-]
I don't think I'm your target audience, but I found it interesting to see the side-by-side comparisons from images containing text. It's pretty cool to see how different models interpret photos, too. Cool tool, must've been fun to make.
timbmg 74 days ago [-]
Would be great to add MistralOCR!
dahateb 74 days ago [-]
I can second that; super cheap at $1 per 1,000 pages.
krashidov 74 days ago [-]
I would be curious to see how Sonnet does. Their models are pretty solid when it comes to PDFs.