According to openrouter.ai (https://openrouter.ai/apps?url=https%3A%2F%2Fopenclaw.ai%2F), it looks like StepFun 3.5 Flash is the most popular model at 3.5T tokens, vs GLM 5 Turbo at 2.5T tokens. Claude Sonnet is in 5th place with 1.05T tokens. Which isn't super surprising, as StepFun is about 5% of the price of Sonnet.
It was free for a long time. That usually skews the statistics. It was the same with grok-code-fast-1.
MaxikCZ 9 minutes ago [-]
Exactly. When I read the headline I thought: "Ofc it is, it's free."
skysniper 11 seconds ago [-]
I should have clarified I didn't use the free version...
skysniper 1 hour ago [-]
The really surprising part to me is that, despite being the cheapest model on the board, StepFun often manages to score high on pure performance. Other models in the same price range (e.g. Kimi) fail to do that.
smallerize 1 hour ago [-]
It looks like Unsloth had trouble generating their dynamic quantized versions of this model, deleted the broken files, then never published an update.
If you haven't heard of it yet, there's some good discussion here: https://news.ycombinator.com/item?id=47069179
The base checkpoints are up on Hugging Face:
- https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base
- https://huggingface.co/stepfun-ai/Step-3.5-Flash-Base-Midtra...
I'm not aware of other AI labs that have released base checkpoints for models in this size class. Qwen released some base models for 3.5, but the biggest one is the 35B checkpoint.
They also released the entire training pipeline:
- https://huggingface.co/datasets/stepfun-ai/Step-3.5-Flash-SF...
- https://github.com/stepfun-ai/SteptronOss
Thanks for the info. Before running the bench I had only tried it on arena.ai-style tasks, and it wasn't impressive. I didn't expect it to be that good at agentic tasks.
dmazin 20 minutes ago [-]
Why do half the comments here read like AI trying to boost some sort of scam?
skysniper 8 minutes ago [-]
lol
skysniper 31 minutes ago [-]
Another thing from the bench I didn't expect: Gemini 3.1 Pro is very unreliable at using skills. Sometimes it just reads the skill and decides to do nothing, while Opus/Sonnet 4.6 and GPT-5.4 never have this issue.
skysniper 2 hours ago [-]
I ran 300+ benchmarks across 15 models in OpenClaw and published two separate leaderboards: performance and cost-effectiveness.
The two boards look nothing alike. Top 3 performance: Claude Opus 4.6, GPT-5.4, Claude Sonnet 4.6. Top 3 cost-effectiveness: StepFun 3.5 Flash, Grok 4.1 Fast, MiniMax M2.7.
The most dramatic split: Claude Opus 4.6 is #1 on performance but #14 on cost-effectiveness. StepFun 3.5 Flash is #1 cost-effectiveness, #5 performance.
Other surprises: GLM-5 Turbo, Xiaomi MiMo v2 Pro, and MiniMax M2.7 all outrank Gemini 3.1 Pro on performance.
Rankings use relative ordering only (not raw scores) fed into a grouped Plackett-Luce model with bootstrap CIs. Same principle as Chatbot Arena — absolute scores are noisy, but "A beat B" is reliable. Full methodology: https://app.uniclaw.ai/arena/leaderboard/methodology?via=hn
I built this as part of OpenClaw Arena — submit any task, pick 2-5 models, a judge agent evaluates in a fresh VM. Public benchmarks are free.
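To make the ranking step concrete, here's a simplified Python sketch of the idea (not the production code, and it ignores the grouping for simplicity): each battle is reduced to an ordered list of model names, best first, fed into a plain Plackett-Luce fit, with bootstrap resampling over battles for the CIs.

    # Simplified illustration only: ungrouped Plackett-Luce over full rankings,
    # with percentile bootstrap CIs. A "ranking" is an ordered list of model
    # names, best first.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import logsumexp

    def fit_plackett_luce(rankings, models):
        idx = {m: i for i, m in enumerate(models)}
        ranked = [[idx[m] for m in r] for r in rankings]

        def neg_log_lik(theta):
            nll = 0.0
            for r in ranked:
                t = theta[r]
                # P(ranking) = prod_j exp(t_j) / sum_{k >= j} exp(t_k)
                for j in range(len(r) - 1):
                    nll -= t[j] - logsumexp(t[j:])
            return nll

        res = minimize(neg_log_lik, np.zeros(len(models)), method="L-BFGS-B")
        theta = res.x - res.x.mean()  # center log-strengths for identifiability
        return dict(zip(models, theta))

    def bootstrap_cis(rankings, models, n_boot=200, seed=0):
        rng = np.random.default_rng(seed)
        draws = []
        for _ in range(n_boot):
            picks = rng.integers(0, len(rankings), len(rankings))
            draws.append(fit_plackett_luce([rankings[i] for i in picks], models))
        return {m: (float(np.percentile([d[m] for d in draws], 2.5)),
                    float(np.percentile([d[m] for d in draws], 97.5)))
                for m in models}

Only the relative order within each battle matters here, which is why noisy absolute judge scores don't hurt the ranking.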
johndough 1 minute ago [-]
Could you add a column for time or number of tokens? Some models take forever because of their excessive reasoning chains.
refulgentis 2 hours ago [-]
Please don't use AI to write comments; it cuts against HN guidelines.
skysniper 1 hour ago [-]
Sorry, didn't know that. Here is my hand-written TL;DR:
- Gemini is very unreliable at using skills; it often just reads the skill and decides to do nothing.
- StepFun leads the cost-effectiveness leaderboard.
- Ranking really depends on the task, so it's better to try your own.
refulgentis 1 hour ago [-]
It's too late once it's happened. I was curious, but when I saw that the site looked vibecoded and that you're commenting with AI, I stopped trying to reason through the discrepancies between what was claimed and what's on the site (e.g. 300 battles vs. only a handful in the site data).
rat9988 50 minutes ago [-]
Too late for what? For you? Maybe. There are many others that are okay with it, and it doesn't diminish the quality of the work. Props to the author.
refulgentis 26 minutes ago [-]
> Too late for what? For you? Maybe.
Maybe? :)
> There are many others that are okay with it
Correct.
> and it doesn't diminish the quality of the work.
It does affect how people first hearing about the work receive it.
I applaud your instinct to defend someone who put in effort. It's one of the most important things we can do.
Another important thing we can do for them is be honest about our own reactions. It's not sunshine and rainbows on its face, but it is generous. Mostly because A) it takes time and B) other people might see red and harangue you for it.
skysniper 1 hour ago [-]
All 300+ battles are available at https://app.uniclaw.ai/arena/battles; every single battle is shown with the raw conversation history, produced files, the judge's verdict, and the final scores.
refulgentis 25 minutes ago [-]
Thanks! Is the judge an LLM? There are a lot of references to "just like LMArena", but LMArena is human-evaluated?
skysniper 8 minutes ago [-]
> Is the judge an LLM?
Yes, the judge is one of Opus 4.6, GPT-5.4, or Gemini 3.1 Pro (the submitter can choose). Self-judging (where the judge model is also one of the participants) is excluded when computing the rankings.
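Concretely, the exclusion is just a filter over the battle records before they go into the ranking fit; the field names below are illustrative, not the actual schema:

    # Toy records; "judge" and "participants" are assumed field names.
    battles = [
        {"judge": "claude-opus-4.6", "participants": ["gpt-5.4", "stepfun-3.5-flash"]},
        {"judge": "gpt-5.4", "participants": ["gpt-5.4", "grok-4.1-fast"]},
    ]

    # Drop self-judged battles (judge is also a participant) before fitting rankings.
    usable = [b for b in battles if b["judge"] not in b["participants"]]
    print(len(usable))  # 1 -- the second battle is excluded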
> There are a lot of references to "just like LMArena", but LMArena is human-evaluated?
Yeah, LMArena is human-evaluated, but here I found it impractical to gather enough human evaluation data, because the effort it takes to compare the results is much higher:
- for code, the judge needs to read through it to check code quality, and actually run it to see the output
- when producing a webpage or a document, the judge needs to check the content and layout visually
- when anything goes wrong, the judge needs to read the execution log to see whether partial credit should be granted
If you look at the cost details of each battle (available at the bottom of the battle detail page), the judge typically costs more than any participant model.
If we evaluated with humans, I'd say each evaluation could easily take ~5-10 min.
refulgentis 5 minutes ago [-]
Fair enough, yeah, agent evals are hard especially across N models :/
Thanks for replying btw, didn't mean any disrespect, good on you for not getting aggro about feedback