Accurate and fast model, very happy with it so far!
dinakernel 1 hour ago [-]
My worry is that ASR will end up like OCR. If the large multimodal AI systems get good enough (latency-wise), the advantage of domain understanding eats the other technologies alive.
In OCR, even when the characters are poorly scanned, the deep domain understanding these large multimodal AIs have lets them work out what the document actually meant - "this is going to be the order ID, because in the million invoices I've seen before, the order ID normally sits below the order date" - etc. My worry is that the same thing will happen in ASR.
progbits 40 minutes ago [-]
This is both good and bad. Good ASR can often understand low-quality / garbled speech that I could not figure out, but it also "over-corrects" sometimes, replacing correct but low-prior words with incorrect but much more common ones.
With OCR the risk is you get another xerox[1] incident where all your data looks plausible but is incorrect. Hope you kept the originals!
(This is why for my personal doc scans, I use OCR only for full text search, but retain the original raw scans forever)
[1] https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...
Why are you 'worried' about it? Shouldn't we strive for better technology even if it means some will 'lose'?
yorwba 2 minutes ago [-]
"Better" isn't just about increasing benchmark numbers. Often, it's more important that a system fails safely than how often it fails. Automatic speech recognition that guesses when the input is unclear will occasionally be right and therefore have a lower word error rate, but if it's important that the output is correct, it might be better to insert "[unintelligible]" and have a human double-check.
gruez 39 minutes ago [-]
> Limitations
>Timestamps/Speaker diarization. The model does not feature either of these.
What a shame. Is whisperx still the best choice if you want timestamps/diarization?
bartman 27 minutes ago [-]
Even in the commercial space, there’s a lack of production grade ASR APIs that support diarization and word level timestamps.
My experiences with Google’s Chirp have been horrendous, with it sometimes skipping sections of speech entirely, hallucinating speech where the audio contains noise, and unreliable word level timestamps. And this all is even with using their new audio prefiltering feature.
AWS works slightly better, but also has trouble with keeping word level timestamps in sync.
Whisper is nice but hallucinates regularly.
OpenAI’s new transcription models are delivering accurate output but do not support word level timestamps…
A lot of this could be worked around by sending the resulting transcripts through a few layers of post processing, but… I just want to pay for an API that is reliable and saves me from doing all that work.
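For what it's worth, a couple of those post-processing layers can be sketched like this - the word-dict schema (`text`/`start`/`end`) and the sample data are made up, not any vendor's response format:

```python
# Sketch of transcript cleanup: collapse immediately-repeated overlapping
# words (a common hallucination pattern) and force word timestamps to be
# non-decreasing so they stay in sync.

def clean_words(words):
    """Drop overlapping duplicate words and keep timestamps monotonic."""
    cleaned, last_end = [], 0.0
    for w in words:
        if cleaned and w["text"] == cleaned[-1]["text"] and w["start"] < last_end:
            continue  # drop the overlapping hallucinated repeat
        w = dict(w, start=max(w["start"], last_end))  # no backwards starts
        cleaned.append(w)
        last_end = max(last_end, w["end"])
    return cleaned

words = [
    {"text": "hello", "start": 0.0, "end": 0.4},
    {"text": "world", "start": 0.4, "end": 0.8},
    {"text": "world", "start": 0.3, "end": 0.8},  # hallucinated repeat
    {"text": "again", "start": 0.7, "end": 1.1},  # start drifted backwards
]
cleaned = clean_words(words)
```

Doing this well across languages and edge cases is exactly the work one would rather pay an API to have already done.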
akreal 34 minutes ago [-]
WhisperX is not a model but a software package built around Whisper and some other models, including diarization and alignment ones. Something similar will be built around the Cohere Transcribe model, maybe even just as an integration into WhisperX itself.
It doesn't use an extra model (so it supports every language that works with Whisper out of the box and uses less memory); it works by applying Dynamic Time Warping to the cross-attention weights.
And someone has already converted it to onnx format: https://huggingface.co/eschmidbauer/cohere-transcribe-03-202... - so it can be run on CPU instead of GPU.
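Roughly, the DTW step looks like this - the cost matrix below is a made-up stand-in for negated cross-attention weights, and the 20 ms frame duration is likewise just an illustrative assumption:

```python
# Sketch of deriving word/token timestamps via Dynamic Time Warping:
# align text tokens (rows) to audio frames (columns) over a cost matrix,
# then take each token's first aligned frame as its start time.

def dtw_path(cost):
    """Classic DTW over a 2D cost list; returns the alignment path."""
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * (m + 1) for _ in range(n + 1)]
    acc[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            acc[i][j] = cost[i - 1][j - 1] + min(
                acc[i - 1][j], acc[i][j - 1], acc[i - 1][j - 1])
    # Backtrack from the corner to recover (token, frame) pairs.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = min(acc[i - 1][j], acc[i][j - 1], acc[i - 1][j - 1])
        if step == acc[i - 1][j - 1]:
            i, j = i - 1, j - 1
        elif step == acc[i - 1][j]:
            i -= 1
        else:
            j -= 1
    return path[::-1]

# Toy "negated attention" costs: 3 tokens x 5 audio frames (20 ms each).
cost = [
    [0.1, 0.2, 0.9, 0.9, 0.9],
    [0.9, 0.2, 0.1, 0.3, 0.9],
    [0.9, 0.9, 0.8, 0.1, 0.2],
]
path = dtw_path(cost)
# Each token's start time = its first aligned frame * frame duration.
starts = {}
for tok, frame in path:
    starts.setdefault(tok, frame * 0.02)
```

No second model is involved: the alignment falls out of attention weights the decoder already computes, which is why it generalizes to every language the base model supports.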
teach 42 minutes ago [-]
Dumb question, but if this is "open source" is there source code somewhere? Or does that term mean something different in the world of models that must be trained to be useful?
I can't say enough nice things about Cohere's services. I migrated over to their embedding model a few months ago for CLIP-style embeddings and it's been fantastic.
It has the most crisp, steady P50 of any external service I've used in a long time.
bluegatty 52 minutes ago [-]
Can you comment on overall quality? Their models tend to be a bit smaller and less performant overall.
topazas 52 minutes ago [-]
How hard would it be to train it on other European languages?
gunalx 35 minutes ago [-]
If you have to ask, you don't really need the answer.
It doesn't seem too difficult to find or create training code. Beyond that you'd need a decent amount of high-quality training data - many hours of it - a fair chunk of high-end data-center GPU compute, and many iterations to get it right.
harvey9 31 minutes ago [-]
It includes several European languages.
stronglikedan 15 minutes ago [-]
hence "other" lol
simonw 1 hours ago [-]
It's great that this is Apache 2.0 licensed - several of Cohere's other models are licensed free for non-commercial use only.
aplomb1026 40 minutes ago [-]
[dead]