advice to OP - you hurt your own credibility posting on medium dot com. just blog on huggingface or substack or hashnode.
peakji 245 days ago [-]
I'm new here. Just curious, why avoid Medium? Is it a Hacker News thing, or did I miss something?
whatshisface 245 days ago [-]
Medium doesn't "hurt your credibility" nearly as much as revealing that one's arsenal of litmus tests is so starved of real knowledge that one judges a post by its web design. That said, Medium does have a lot of annoying popups, and a lot of people prefer Substack, which also has a paid-subscription feature that works well.
(realistically speaking, experts tend to know less about the blog hosting ecosystem the more they know about their domain)
swyx 245 days ago [-]
its just a "tell" that you don't mind the poor reader experience and being associated with the rest of the low-quality slop on medium. many of us here have simply given up clicking on anything medium-related
As someone without specific background in the subfield (I do embedded programming) – thanks for spelling out what people "in the know" seem to understand about o1's functioning!
Approaches like best-of-n sampling and majority voting are definitely feasible (rough sketch below). But I don't recommend trying things related to CoT prompting, as they might interfere with the internalized reasoning patterns.
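For example, a minimal Python sketch of both approaches. Here `generate` and `score` stand in for whatever sampling call and verifier/reward model you have on hand (e.g. an OpenAI-compatible endpoint serving the GGUF build); neither is part of Steiner's API:

    from collections import Counter
    from typing import Callable

    def majority_vote(generate: Callable[[str], str], prompt: str, n: int = 8) -> str:
        # Sample n answers at temperature > 0 and keep the most frequent one.
        answers = [generate(prompt).strip() for _ in range(n)]
        return Counter(answers).most_common(1)[0][0]

    def best_of_n(generate: Callable[[str], str], score: Callable[[str], float],
                  prompt: str, n: int = 8) -> str:
        # Keep the candidate that the external scorer rates highest.
        return max((generate(prompt) for _ in range(n)), key=score)

In practice you would extract the final answer span before counting votes, since full generations rarely match verbatim.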
nwnwhwje 245 days ago [-]
Silly question time.
Is this a fine-tuned LLM, for example a drop-in replacement for Llama etc.?
Or is it some algorithm on top of an LLM, doing some chain of reasoning?
peakji 245 days ago [-]
It is an LLM fine-tuned using a new type of dataset and RL reward. It's good at reasoning, but I would not recommend it as a replacement for Llama on general tasks.
Mr_Bees69 245 days ago [-]
Really hope this goes somewhere, o1 without openai's costs and restrictions would be sweet.
peakji 245 days ago [-]
The model can already answer some tricky questions that other models (including GPT-4o) have failed to address, achieving a +5.56 improvement on the GPQA-Diamond dataset. Unfortunately, it has not yet managed to reproduce inference-time scaling. I will continue to explore different approaches!
swyx 245 days ago [-]
not sure i understand the results. it's based on qwen 32b which scores 49.49, and your best model is 53.54. the results haven't shown that your approach adds significant value yet.
can you compare with just qwen 32b with CoT?
peakji 245 days ago [-]
The result for Qwen2.5-32B (49.49) was measured with CoT prompting. Only the Steiner models do not use CoT prompting (illustrative example below).
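By CoT prompting I mean something like the standard zero-shot suffix; this is illustrative only, not necessarily the exact prompt used in the evaluation:

    def cot_prompt(question: str) -> str:
        # Standard zero-shot chain-of-thought suffix (illustrative; the
        # Steiner models get the question as-is, with no such suffix).
        return question + "\n\nLet's think step by step."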
More importantly, I highly recommend trying these out firsthand (not only Steiner, but all reasoning models). You'll find that reasoning models can solve many problems that other models of the same parameter size cannot handle. The existing benchmarks may not reflect this well, as I mentioned in the article:
"... automated evaluation benchmarks, which are primarily composed of multiple-choice questions and may not fully reflect the capabilities of reasoning models. During the training phase, reasoning models are encouraged to engage in open-ended exploration of problems, whereas multiple-choice questions operate under the premise that "the correct answer must be among the options." This makes it evident that verifying options one by one is a more efficient approach. In fact, existing large language models have, consciously or unconsciously, mastered this technique, regardless of whether special prompts are used. Ultimately, it is this misalignment between automated evaluation and genuine reasoning requirements that makes me believe it is essential to open-source the model for real human evaluation and feedback."
swyx 244 days ago [-]
thanks, congrats on shipping.
ActorNightly 245 days ago [-]
OpenAI's o1 isn't really going that far though. It's definitely better in some areas, but not overall better.
I'm wondering if we can push chain of thought further down into the computation levels to replace a lot of the matrix multiplies. Like smaller transformers with fewer parameters, plus more selection of which transformer to use through search (rough sketch below).
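Purely speculative, but the kind of loop I mean looks like this; `experts` (a pool of small transformers) and `value` (a learned scorer guiding the search) are both hypothetical:

    from typing import Callable, List

    def route_and_generate(experts: List[Callable[[str], str]],
                           value: Callable[[str], float],
                           state: str, steps: int = 4) -> str:
        # At each step every small expert proposes a continuation, and a
        # greedy search keeps the proposal the value function scores highest.
        # A beam search would keep the top-k states instead of just one.
        for _ in range(steps):
            proposals = [expert(state) for expert in experts]
            state = max(proposals, key=value)
        return state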
I haven't personally used an Ollama Modelfile, but I think it should be relatively easy to convert from GGUF (rough sketch below)? In any case, you can run GGUF repos from the Hub directly:
ollama run hf.co/{username}/{repository}
Example: ollama run hf.co/peakji/steiner-32b-preview-gguf:Q4_K_M
Source: https://huggingface.co/docs/hub/en/ollama
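If you do want a Modelfile, a minimal sketch (untested; the local filename is hypothetical, and you may also need a TEMPLATE directive matching the model's chat format):

    # Modelfile
    FROM ./steiner-32b-preview.Q4_K_M.gguf
    PARAMETER temperature 0.7

Then:

    ollama create steiner -f Modelfile
    ollama run steiner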