AI is going to make heroin look like a joke as more people integrate it into their lives. You're gonna have junkies doing some crazy shit just to get more AI credits.
mckn1ght 15 minutes ago [-]
Anyone who has actually known and dealt with heroin addicts can see that the only joke here is the hyperbolic outburst you’ve put on display.
A4ET8a8uTh0_v2 1 hour ago [-]
Step into a variant of the future where Claude is as important to the internet as AWS: the constant, near-real-time rewrite of the interwebz has to be stopped for 4 hours, causing an incredible hacking spree as the constant rewrites open, close, and re-open various holes.
There is a part of me thinking that my initial thoughts on LLMs were not accurate (like humanity's long-term reaction to their impact).
agumonkey 59 minutes ago [-]
Time to start a business with all the fired devs acting as interim AI when Claude flares up.
rootnod3 14 minutes ago [-]
Claude Meat?
Meat Code?
Copyright is pending
Traubenfuchs 1 hour ago [-]
Just switch to Gemini for the time being, assuming you did not fall for the trap of Claude-specific config.
Zero moat.
lxgr 52 minutes ago [-]
Claude Code (the agent scaffolding) only works with the Claude API, I think? That’s at least a bit of a moat.
derwiki 36 minutes ago [-]
cursor-cli, codex, and aider should all be roughly drop-in replacements that can use non-Anthropic models.
I couldn't fix any of my UI quality-of-life bugs, so I had to work on actual backend logic and distributed state consistency. Not what I wanted for an early morning coding sesh. Nightmare! /s
traceroute66 1 hour ago [-]
I think you forgot the word "AGAIN" from your title.
Have you seen their status page? Every single month is littered with yellow and red.
For those of us old-school programmers it makes little difference; only the vibe coders throwing away $200 a month on Claude subs will be the ones crying!
rubicon33 1 hour ago [-]
I’m an “old school programmer” just like you, but I still use Claude Code.
For greenfield projects it’s absolutely faster to churn out code I’ve written 100 times in the past. I don’t need to write another RBAC system, I just don’t. I don’t need to write another table implementation for a frontend data view.
Where Claude helps us is speed and breadth. I can do a lot more in less time, and depending on what your goals are, this may or may not be valuable to you.
phyzome 44 minutes ago [-]
What kind of projects are you working on that aren't amenable to the sort of code reuse or abstraction that normally addresses this sort of "boilerplate"?
stingraycharles 29 minutes ago [-]
There are lots of projects like that, especially when doing work for external clients.
Very often they want to own all the code, so you cannot just abstract things in your own engine. It then very easily becomes the pragmatic choice to just use existing libraries and frameworks to implement these things when the client demands it.
Especially since every client wants different things.
At the same time, even though there are libraries available, it’s still work to stitch everything together.
For straightforward stuff, AI takes all that work out of your hands.
nine_k 14 minutes ago [-]
Writing boilerplate code is mostly creative copy-pasting.
If I were to do it, I would have most of the reusable code (e.g. of an RBAC system) written and documented once and kept unpublished. Then I would ask an AI tool to alter it, given a set of client-specific properties. It would be easier to review moderate changes to a familiar and proven piece of code. The result could be copied to the client-specific repo.
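A minimal sketch of that loop, assuming the official Anthropic Python SDK; the template path, client properties, model name, and output location below are hypothetical illustrations, not anyone's real setup:

    # Hypothetical illustration of the workflow above: keep a proven template,
    # let the model adapt it per client, then review the diff by hand.
    import pathlib
    import anthropic  # assumes the official SDK and ANTHROPIC_API_KEY in the environment

    template = pathlib.Path("templates/rbac.py").read_text()  # hypothetical template path
    client_props = {"roles": ["admin", "editor", "viewer"], "tenant_scoped": True}

    client = anthropic.Anthropic()
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption: any recent Claude model would do
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": (
                "Adapt this proven RBAC module to the following client requirements, "
                f"changing as little as possible:\n{client_props}\n\n{template}"
            ),
        }],
    )

    # Drop the adapted copy into the client-specific repo and review the diff manually.
    pathlib.Path("client_repo/rbac.py").write_text(msg.content[0].text)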
zwnow 33 minutes ago [-]
I was wondering about that as well; copy and paste has been a thing for a lot longer than LLMs...
DominoTree 53 minutes ago [-]
Trusting an AI to write an RBAC system feels like asking for trouble
infecto 5 minutes ago [-]
If you don’t have anything productive to add, don’t say it.
I would put myself on the bridge between pre-internet coders and the modern generation. I use these types of tools and don’t consider myself a vibe coder.
__warlord__ 1 hour ago [-]
you are absolutely right
NewsaHackO 24 minutes ago [-]
Wow, great insight – here's how Claude being down is affecting code production globally
skerit 22 minutes ago [-]
I noticed one single API error a few hours ago. Didn't seem to be down for long.
(I prefer the occasional downtime here and there versus Gemini's ridiculous usage limits)
chanux 47 minutes ago [-]
Anybody who has experience running infra for ML/AI/data pipeline systems: are they drastically different from regular infra?
stingraycharles 41 minutes ago [-]
Yes, they are. They work vastly differently in terms of hardware dependencies and data workflow.
Hardware dependencies: GPUs and TPUs and all that are not equal. You will have to have code and caches that only work with Google’s TPUs, other code and caches that only work with CUDA, etc.
Data workflow: you will have huge LLM models that need to be loaded at just the right time.
Oh wait, your model uses MoE? That means a request to the 200GB model that’s split over 10 “experts” only needs maybe 20GB of it. So then it would be great if we could somehow pre-route the request to the right GPU that already has that specific expert loaded.
But wait! This is a long conversation, and the cache was actually on a different server. Now we need to reload the cache on the new server that actually has this particular expert preloaded in its GPU.
etc.
It’s very different, mostly because it’s new tech, it’s very expensive, and cost optimizations are difficult but impactful.
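As a toy illustration (nothing like Anthropic's actual scheduler; the scores and IDs below are made up), the routing trade-off might look like this:

    from dataclasses import dataclass, field

    @dataclass
    class Server:
        name: str
        loaded_experts: set[str] = field(default_factory=set)
        cached_conversations: set[str] = field(default_factory=set)

    def route(servers: list[Server], expert: str, conversation_id: str) -> Server:
        # Prefer a server that already holds the needed expert weights; holding the
        # conversation's KV cache is only a tie-breaker, since re-prefilling a cache
        # is cheaper than pulling ~20GB of expert weights onto a GPU.
        def score(s: Server) -> int:
            points = 0
            if expert in s.loaded_experts:
                points += 2
            if conversation_id in s.cached_conversations:
                points += 1
            return points
        return max(servers, key=score)

    servers = [
        Server("gpu-a", loaded_experts={"expert-3"}, cached_conversations={"conv-42"}),
        Server("gpu-b", loaded_experts={"expert-7"}),
    ]
    # conv-42's cache lives on gpu-a, but the expert it now needs is only on gpu-b,
    # so the cache has to move to gpu-b.
    print(route(servers, expert="expert-7", conversation_id="conv-42").name)  # gpu-b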
hiq 2 minutes ago [-]
[delayed]
lxgr 55 minutes ago [-]
Should this be “the Claude API is down”, or is there a specific one used (only) by Claude Code?
fred_ 2 hours ago [-]
Based on the status page, it should be operational again.
Anthropic's API is not your only choice for a Claude Code workflow.
ralusek 46 minutes ago [-]
It’s kind of embarrassing how many people in the comments seem to derive a sense of identity from not using AI. Before LLMs, I didn’t use them to code. Then there were LLMs, and I used them a little to code. Then they got better at code, and now I use them a little more.
Probably 20% of the code I produce is generated by LLMs, but all of the code I produce at this point is sanity checked by them. They’re insanely useful.
Zero of my identity is tied to how much of the code I write involves AI.
stingraycharles 37 minutes ago [-]
The irony is that by asserting how much you don’t tie your identity to AI, you, in turn, identify yourself in a certain way.
I’m reminded of that South Park episode with the goths. “I’m so much of a non-conformist I’m going to non-conform with the non-conformists.”
In the end it all doesn’t matter.
saaaaaam 15 minutes ago [-]
I think you’ve put your finger on it. This isn’t about AI; it’s about the threat to people’s identity presented by AI. For a while now “writing code” has been a high-status profession, with a certain amount of impenetrable mystique that “normies” can’t hurdle. AI has the potential to quite quickly shift “writing code” from a high-status profession that people respect to a commodity that those same normies can access.
For people whose identities and sense of self have been bolstered by being a member of that high-status group, AI is a big threat - not because of the impact on their work, but because of the potential to remove their status; and if their status slips away, they may realise they have nothing much else left.
When people feel threatened by new technology they shout loud and proud about how they don’t use it and everything is just fine. Quite often that becomes a new identity. Let them rail and rage against the storm.
“Blow winds, and crack your cheeks! Rage! Blow!”
The image of Lear, “a poor, infatuated, despised old man”, seems curiously apt here.
golbez9 2 hours ago [-]
It's Joever
brianbest101 2 hours ago [-]
[dead]
nextworddev 2 hours ago [-]
By that I’m guessing the API is down, since Claude Code is just the harness?
throwpoaster 2 hours ago [-]
Yes.