Great work! I still think that [1] does a better job of helping us understand how GPT and LLM work, but yours is funnier.
Then, some criticism. I probably don't get it, but I think the HN headline does your project a disservice. Your project does not demystify anything (see below) and it diverges from your project's claim, too. Furthermore, I think you claim too much on your github. "This project exists to show that training your own language model is not magic." and then just posts a few command line statements to execute. Yeah, running a mail server is not magic, just apt-get install exim4. So, code. Looking at train_guppylm.ipynb and, oh, it's PyTorch again. I'm better off reading [2] if I'm looking into that (I know, it is a published book, but I maintain my point).
So, in short, it does not help the initiated or the uninitiated. For the initiated it needs more detail for it to be useful, the uninitiated more context for it to be understood. Still a fun project, even if oversold.
This has little to do with the thread, but I have to say your project is indeed pretty cool! Consider adding some more UI?
ordinarily 8 hours ago [-]
It's genuinely a great introduction to LLMs. I built my own a while ago based on Milton's Paradise Lost: https://www.wvrk.org/works/milton
mudkipdev 5 hours ago [-]
This is probably a consequence of the training data being fully lowercase:
You> hello
Guppy> hi. did you bring micro pellets.
You> HELLO
Guppy> i don't know what it means but it's mine.
functional_dev 4 hours ago [-]
Great find! It appears uppercase tokens are completely unknown to the tokenizer.
But the character still comes through in response :)
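A toy sketch of the failure mode (hypothetical vocab, not the project's actual tokenizer): with no lowercasing step before encoding, uppercase input collapses to the unknown token, so the model never sees "HELLO" at all.

```python
# hypothetical word-level vocab built from lowercase-only training text
vocab = {"<unk>": 0, "hello": 1, "pellets": 2}

def encode(text):
    # no normalizer: "HELLO" isn't in the vocab, so it maps to <unk>
    return [vocab.get(tok, vocab["<unk>"]) for tok in text.split()]

print(encode("hello"))          # [1]
print(encode("HELLO"))          # [0] -- the model only ever sees <unk>
print(encode("HELLO".lower()))  # [1] -- lowercasing at inference would fix it
```

Adding a lowercase normalizer at inference time (or to the tokenizer itself) would make both inputs identical.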
hackerman70000 3 hours ago [-]
Finally an LLM that's honest about its world model. "The meaning of life is food" is arguably less wrong than what you get from models 10,000x larger
Duplicake 30 minutes ago [-]
I love this! Seems like it can't understand uppercase letters though
bblb 2 hours ago [-]
Would it be possible to train an LLM only through chat messages, without any other data or input?
If Guppy doesn't know regular expressions yet, could I teach it to it just by conversation? It's a fish so it wouldn't probably understand much about my blabbing, but would be interesting to give it a try.
Or is there some hard architectural limit in current LLMs, such that training needs to be done offline with a fairly large training set?
roetlich 2 minutes ago [-]
What does "done offline" mean? Otherwise you are limited by context window.
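To spell out the distinction (a minimal sketch, not the project's code, with a made-up window size): "teaching by conversation" only prepends text to the prompt, and anything older than the context window is gone. Actually learning regex would require updating the weights with more training data.

```python
# sketch: in-context "teaching" is just prompt accumulation -- the weights
# never change, and old turns fall out of the window
CONTEXT_WINDOW = 256  # characters here, as a stand-in for tokens (hypothetical)
history = []

def chat(user_msg):
    history.append(f"You> {user_msg}")
    prompt = "\n".join(history)[-CONTEXT_WINDOW:]  # oldest text is truncated
    # model_generate(prompt) would go here; nothing persists between sessions
    return prompt
```

So conversational "teaching" works only as long as the lesson still fits in the window; it is not stored anywhere.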
zwaps 6 hours ago [-]
I like the idea, just that the examples are reproduced from the training data set.
How does it handle unknown queries?
fawabc 46 minutes ago [-]
how did you generate the synthetic data?
cbdevidal 8 hours ago [-]
> you're my favorite big shape. my mouth are happy when you're here.
Laughed loudly :-D
vunderba 7 hours ago [-]
This is a direct output from the synthetic training data though - wonder if there is a bit of overfitting going on or it’s just a natural limitation of a much smaller model.
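One cheap way to test the memorization hypothesis (a sketch; `train_lines` here is made up, assuming you have the training set as plain text): check whether the response appears verbatim in the data.

```python
# if a response appears verbatim in the training data, it's recall,
# not generalization
def is_verbatim(response, train_lines):
    return any(response in line for line in train_lines)

train_lines = [
    "you're my favorite big shape. my mouth are happy when you're here.",
]
print(is_verbatim("my mouth are happy", train_lines))  # True -> memorized
```

At 9M parameters, a high verbatim hit rate would point to memorization rather than a stylistic model of fish-speak.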
ananandreas 1 hours ago [-]
Great and simple way to bridge the gap between LLMs and users coming in to the field!
ben8bit 2 hours ago [-]
This is really great! I've been wanting to do something similar for a while.
ankitsanghi 6 hours ago [-]
Love it! I think it's important to understand how the tools we use (and will only increasingly use) work under the hood.
gdzie-jest-sol 2 hours ago [-]
* How do you create the dataset? I downloaded it, but it is compressed in a binary format.
* How do you train it? In the cloud or on my own dev machine?
* How do you create a GGUF?
gdzie-jest-sol 2 hours ago [-]
```
uv run python -m guppylm chat
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/home/user/gupik/guppylm/guppylm/__main__.py", line 48, in <module>
main()
File "/home/user/gupik/guppylm/guppylm/__main__.py", line 29, in main
engine = GuppyInference("checkpoints/best_model.pt", "data/tokenizer.json")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/gupik/guppylm/guppylm/inference.py", line 17, in __init__
self.tokenizer = Tokenizer.from_file(tokenizer_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Exception: No such file or directory (os error 2)
```
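The `os error 2` suggests `data/tokenizer.json` was never generated (the data-prep step probably has to run before `chat`). A fail-fast guard like this (hypothetical, not in the repo) would give a clearer error:

```python
from pathlib import Path

def check_assets(tokenizer_path="data/tokenizer.json",
                 checkpoint_path="checkpoints/best_model.pt"):
    # fail with a readable message instead of a bare "os error 2"
    missing = [p for p in (tokenizer_path, checkpoint_path)
               if not Path(p).exists()]
    if missing:
        raise FileNotFoundError(
            f"missing {missing}; run the data-prep and training steps "
            "before `guppylm chat`"
        )
```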
gdzie-jest-sol 1 hours ago [-]
Maybe add resuming training (read the best or final checkpoint) and train again:
```
# after configuring the device
checkpoint_path = "checkpoints/best_model.pt"
ckpt = torch.load(checkpoint_path, map_location=device, weights_only=False)
model = GuppyLM(mc).to(device)
if "model_state_dict" in ckpt:
    model.load_state_dict(ckpt["model_state_dict"])
else:
    model.load_state_dict(ckpt)
start_step = ckpt.get("step", 0)
print(f"Resuming from step {start_step}")
```
This is so cool! I'd love to see a write-up on how you made it and what you referenced, because designing neural networks always feels like a maze ;)
kubrador 6 hours ago [-]
how's it handle longer context, or does it start hallucinating after like 2 sentences? curious what the ceiling is for the 9M params
gnarlouse 8 hours ago [-]
I... wow, you made an LLM that can actually tell jokes?
murkt 4 hours ago [-]
With 9M params it just repeats the joke from a training dataset.
NyxVox 8 hours ago [-]
Hm, I can actually try the training on my GPU. One of the things I want to try next. Maybe a bit more complex than a fish :)
cpldcpu 3 hours ago [-]
Love it! Great idea for the dataset.
brcmthrowaway 5 hours ago [-]
Why are there so many dead comments from new accounts?
59nadir 2 hours ago [-]
Because despite what HN users seem to think, HN is a LLM-infested hellscape to the same degree as Reddit, if not more.
wiseowise 59 minutes ago [-]
You’re absolutely right! HN isn’t just LLM-infested hellscape, it’s a completely new paradigm of machine assisted chocolate-infused information generation.
toyg 46 minutes ago [-]
Just let me know which type of information goo you'd like me to generate, and I'll tailor the perfect one for you.
loveparade 4 hours ago [-]
It really seems it's mostly AI comments on this. Maybe this topic is attractive to all the bots.
AlecSchueler 5 hours ago [-]
They all seem to be slop comments.
SilentM68 9 hours ago [-]
Would have been funny if it were called "DORY", given the fish's memory-recall issues vs LLMs' similar recall issues :)
monksy 5 hours ago [-]
Is this a reference from the Bobiverse?
rclkrtrzckr 4 hours ago [-]
I could fork it and create TrumpLM. Not a big leap, I suppose.
search_facility 3 hours ago [-]
probably 8M params are too much even :)
danparsonson 1 hours ago [-]
As long as you use the best parameters then it doesn't matter
wiseowise 58 minutes ago [-]
Grab her by the pointer.
AndrewKemendo 9 hours ago [-]
I love these kinds of educational implementations.
I want to really praise the (unintentional?) nod to Nagel: by limiting capabilities to the representation of a fish, the user is immediately able to understand the constraints. It can only talk like a fish because it's very simple.
Especially compared to public models, that's a really simple correspondence to grok intuitively (small LLM → only as verbose as a fish, larger LLM → more verbose), so kudos to the author for making that simple and fun.
dvt 9 hours ago [-]
> the user is immediately able to understand the constraints
Nagel's point was quite literally the opposite[1] of this, though. We can't understand what it must "be like to be a bat" because their mental model is so fundamentally different than ours. So using all the human language tokens in the world can't get us to truly understand what it's like to be a bat, or a guppy, or whatever. In fact, Nagel's point is arguably even stronger: there's no possible mental mapping between the experience of a bat and the experience of a human.
IMO we're a step before that: We don't even have a real fish involved, we have a character that is fictionally a fish.
In LLM-discussions, obviously-fictional characters can be useful for this, like if someone builds a "Chat with Count Dracula" app. To truly believe that a typical "AI" is some entity that "wants to be helpful" is just as mistaken as believing the same architecture creates an entity that "feels the dark thirst for the blood of the living."
Or, in this case, that it really enjoys food-pellets.
andoando 5 hours ago [-]
I'd strongly disagree with that. We're all living in the same shared universe, and underlying every intelligence must be precisely an understanding of events happening in this space-time.
AndrewKemendo 9 hours ago [-]
Different argument
I’m not going to argue other than to say that you need to view the point from a third party perspective evaluating “fish” vs “more verbose thing,” such that the composition is the determinant of the complexity of interaction (which has a unique qualia per nagel)
Hence why it’s a “unintentional nod” not an instantiation
Elengal 1 hours ago [-]
Cool
nullbyte808 9 hours ago [-]
Adorable! Maybe a personality that speaks in emojis?
I think this is a nice project because it is end to end and serves its goal well. Good job! It's a good example of how someone might do something similar for a specific purpose. There are other visualizers that explain different aspects of LLMs, but this is a good applied example.
martmulx 8 hours ago [-]
How much training data did you end up needing for the fish personality to feel coherent? Curious what the minimum viable dataset looks like for something like this.
[1] https://spreadsheets-are-all-you-need.ai/
[2] https://github.com/rasbt/LLMs-from-scratch
[1] https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf
https://huggingface.co/datasets/arman-bd/guppylm-60k-generic