And here I was hoping that this was local inference :)
micw 1 hour ago [-]
Sure. Why purchase a H200 if you can go with an ESP32 ^^
__tnm 50 minutes ago [-]
haha well I got something ridiculous coming soon for zclaw that will kinda work on board.. will require the S3 variant tho, needs a little more memory. Training it later today.
peterisza 1 hour ago [-]
right, 888 kB would be impossible for local inference
however, it is really not that impressive for just a client
Dylan16807 59 minutes ago [-]
It's not completely impossible, depending on what your expectations are. That language model that was built out of redstone in minecraft had... looks like 5 million parameters. And it could do mostly coherent sentences.
bensyverson 17 minutes ago [-]
This is absolutely glorious. We used to talk about "smart devices" and IoT… I would be so curious to see what would happen if these connected devices had a bit more agency and communicative power. It's easy to imagine the downsides, and I don't want my email managed from an ESP32 device, but what else could this unlock?
yauneyz 19 minutes ago [-]
Genuinely curious - did you use a coding agent for most of this, or does this level of performance take hand-written code?
Is there a heartbeat alternative? I feel like this is the magic behind OpenClaw and what gives it the "self-driven" feel.
g947o 2 hours ago [-]
Serious question: why? What are the use cases and workflows?
eleventyseven 14 minutes ago [-]
The various *claws are just a pipe between LLM APIs and a bunch of other APIs/CLIs. You can have it listen via Telegram or WhatsApp for a prompt you send, say to generate some email or social post, which it forwards to the LLM API. It gets back a tool call, which the claw then makes against your email or social API. You could have it regularly poll for new emails or posts, generate a reply via some prompt, and send the reply.
The reason people were buying separate Mac minis just to run open claw was 1) security, since it was all vibe coded and needed to be sandboxed, 2) iMessage relay, and maybe 3) local inference, though pretty slow. If you don't need to relay iMessage, a Raspberry Pi can host it on its own. So if all you need is the pipe, an ESP32 works.
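The "pipe" pattern described above can be sketched as a minimal poll loop: fetch incoming prompts, forward each to an LLM API, and dispatch whatever tool call comes back. All function and tool names here are hypothetical stand-ins, stubbed for illustration; a real claw would replace the stubs with Telegram/email/LLM API calls.

```python
def fetch_incoming():
    """Stub for polling Telegram/WhatsApp/email for new prompts."""
    return ["draft a reply to the latest email"]

def call_llm(prompt):
    """Stub for an LLM API call that may return a tool-call request."""
    return {"tool": "send_email", "args": {"body": f"Re: {prompt}"}}

# Registry mapping tool names to the local APIs/CLIs the claw can drive.
TOOLS = {
    "send_email": lambda args: f"sent: {args['body']}",  # stub email API
}

def pipe_once():
    """One iteration of the pipe: prompt in, tool call out."""
    results = []
    for prompt in fetch_incoming():
        reply = call_llm(prompt)
        tool = TOOLS.get(reply.get("tool"))
        if tool:
            results.append(tool(reply["args"]))
    return results

print(pipe_once())
```

Nothing in this loop needs much memory or compute, which is why the commenters above note it fits on a Raspberry Pi or even an ESP32: all the heavy lifting happens behind the remote LLM API.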
grzracz 40 minutes ago [-]
I don't fully get it either. At least agents build stuff, claws just run around pretending to be alive?
milar 1 hour ago [-]
for fun!
johnea 2 hours ago [-]
I don't really need any assistance...
throwa356262 2 hours ago [-]
Me neither.
But I have 10-15 ESP32's just waiting for a useful project. Does HN have better suggestions?
a kid-pleaser at the very least