This is what I’ve been using for non-confidential projects for about a week now (soon after v4 came out). I honestly can’t tell the difference, but I’m not doing anything crazy with it either.
Worth noting that I don't think DeepSeek's API lets you opt out of training. Once this is up on other providers though… (OpenRouter is just proxying to DeepSeek atm)
Also, the author checked in their apparently effective social media advertising plan: https://github.com/aattaran/deepclaude/commit/a90a399682defc... (which seems to be working)
maxgashkov 8 minutes ago [-]
As of now, OpenRouter offers multiple providers for DeepSeek with ZDR (not sure if they respect it but still).
tariky 4 hours ago [-]
I wanted to try this. To bring back Opus and Sonnet, do I just unset those env vars?
ianmurrays 3 hours ago [-]
Correct.
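For example, assuming you exported the variables from the OpenRouter snippet downthread (a minimal sketch; variable names as they appear there):

    # drop the overrides to fall back to stock Anthropic Opus/Sonnet
    unset ANTHROPIC_BASE_URL ANTHROPIC_AUTH_TOKEN ANTHROPIC_DEFAULT_SONNET_MODEL
    claude

If you only set them inline for a single invocation (VAR=... claude), there's nothing to undo; just run claude normally.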
varenc 6 hours ago [-]
The more interesting part of deepclaude is the local proxy it runs to switch models mid-session and do combined cost tracking. Though these features seem quite buried in the LLM-generated readme. Looking at the history, it appears they were added later, and the readme wasn't restructured to highlight this.
How come such slop is allowed here? What value do these vibe coded zero-shot "projects" add? Why not just post the prompt?
woctordho 2 hours ago [-]
For the same reason that GitHub has a releases page for uploading binaries.
fragmede 5 hours ago [-]
Convenience? Am I supposed to take the prompt and use my own tokens on it? Why should I have to do that?
otabdeveloper4 5 hours ago [-]
Recruiters used to use a candidate's GitHub "sources" page as a kind of proof-of-work when evaluating them.
groestl 5 hours ago [-]
And recruiter agents still do.
aaurelions 10 hours ago [-]
It seems like any project that makes fun of Claude is bound to reach the top spot on Hacker News. Even if it’s just a project consisting of four lines of code.
oblio 28 minutes ago [-]
You're just mean. I count 6 lines of code!
ihsw 10 hours ago [-]
[dead]
spirit23 7 hours ago [-]
So I created https://getaivo.dev; you can use any model in the coding agent directly. Just `aivo claude -m deepseek-v4-pro`
Tanxsinxlnx 2 hours ago [-]
Does it support the AWS Bedrock provider? And can I use any model with this?
spirit23 2 hours ago [-]
Currently no, but it can be added
btbuildem 9 hours ago [-]
This in essence is what allows one to use any model with CC -- including local.
niobe 3 hours ago [-]
thanks, that was super easy.
I have been wanting to try CC with different models since Opus went downhill last month..
What limitations or issues have you noticed when using DeepSeek with Claude Code if any?
nadermx 10 hours ago [-]
The AI wars have begun
heisenbit 4 hours ago [-]
And they are enticing human agents to further their agendas using techniques learned from the white mice.
stingraycharles 8 hours ago [-]
This has been possible since the beginning.
faangguyindia 7 hours ago [-]
Those who use DeepSeek v4: what level of output do you get? Codex 5.3 or GPT 5.4?
Is the Flash version on the level of GPT 5.4 mini?
adonese 4 hours ago [-]
I tried it on a non-trivial, but also well-documented and self-contained task. It did amazingly well. I used DeepSeek v4 Pro via the DeepSeek platform. The model is very fast and also super cheap. I burned only 0.06 USD (I can only guess what the same task would have cost me had I used, e.g., Amp).
PS. Mentioning Amp because I used to use it and I pay directly for tokens. I topped up 5 USD so I'm going to keep using it and see how far it can take me. But my impression so far is even when model subsidization is done, those open source models are quite viable alternatives.
zozbot234 4 hours ago [-]
> But my impression so far is even when model subsidization is done, those open source models are quite viable alternatives.
My understanding is that DeepSeek V4 Pro is going to be uniquely good at working on consumer platforms with SSD offload, due to its extremely lean KV cache. Even if you only have a slow consumer platform, you should be able to just let it grind on a huge batch of tasks in parallel entirely unattended, and wake up later to a finished job.
AIUI, people are even experimenting with offloading the KV cache itself to storage, which may unlock this batching capability even beyond physical RAM limits as contexts grow. (This used to be considered a bad idea with bulky KV caches, due to concerns about wearout and performance, but the much leaner KV cache of DeepSeek V4 changes the picture quite radically.)
torginus 1 hour ago [-]
Good. It's hard to overstate how nervous most executives are about relying on cloud-based providers.
AI currently works basically by sending your entire codebase, workflow, and internal communication over the internet to some third-party provider, and your only protection is some legal document saying they pinky promise they won't train on your data.
And said promise is made by people whose entire business model relies on being able to slurp up all the licensed content on the internet and ignore said licensing, with the defense of being too big to fail.
zozbot234 1 hour ago [-]
Yes, this is the most obvious argument for local AI inference. "Why buy cloud-based SOTA AI? We have SOTA AI at home." It's great that DeepSeek may now be about to make this possible, once the support in local inference frameworks is up to the task.
adonese 4 hours ago [-]
Is there any place I can read about KV caches? Excuse my ignorance, as I'm not familiar with this topic; I've read scattered notes that DeepSeek's costs are well optimized due to how their KV cache works. But I want to read more about how the KV cache relates to the inference stack and where it actually sits.
> AIUI, people are even experimenting with offloading the KV cache itself to storage, which may unlock this batching capability even beyond physical RAM limits as contexts grow.
Especially this point. Any reason that this idea was considered bad? Is it due to the speed difference between GPU VRAM and system RAM?
zozbot234 3 hours ago [-]
KV cache generally grows linearly with your current context; it gets filled in with your prompts during prompt processing, and newly created context gets tacked on during token generation. LLM inference uses it to semantically relate the currently-processed token to its pre-existing context.
> Any reason that this idea was considered bad?
Because the KV cache was too big, even for a small context. This is still an issue with open models other than DeepSeek V4, though to a somewhat smaller extent than used to be the case. But the tiny KV of DeepSeek V4 is genuinely new.
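To make the scaling concrete, a rough back-of-envelope with made-up dimensions (not DeepSeek's actual numbers):

    per-token KV ≈ 2 (K and V) × n_layers × n_kv_heads × head_dim × bytes_per_element
    e.g. GQA-style: 2 × 60 × 8 × 128 × 2 bytes ≈ 240 KB/token → ~30 GB of KV at 128k context
    MLA-style compressed latent: 60 layers × 576 dims × 2 bytes ≈ 68 KB/token → ~9 GB at 128k

That order-of-magnitude gap is what makes grinding the cache through slower storage tiers plausible at all.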
spaceman_2020 2 hours ago [-]
Have you used it for non-coding tasks via MCP, like Figma/Paper for design or Ableton MVP for sound design?
The token cost makes it tempting to use for token-heavy tasks like this
syntex 1 hour ago [-]
Not sure you can replace Claude with DeepSeek V4 that easily and get the same results.
From what I see while building my own agentic system in Elixir, the problem is training for your specific harness/contracts. Claude/GPT-style models seem to be trained around the very specific contracts used by the harness: tool call formats, planning structure, patching, reading files, recovering from errors, and knowing when to stop.
In practice, you either need a very strong general model that can infer and follow those contracts (expensive), or a weaker model that has been fine-tuned / trained specifically on your own agent contracts. Otherwise, the whole thing becomes flaky very quickly. And I suspect with DeepSeek V4 you may get the latter option.
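To illustrate what "contract" means here, a hypothetical tool-call shape a harness might expect (purely illustrative, not any vendor's actual format; the path is made up):

    {"type": "tool_use", "name": "read_file", "input": {"path": "lib/my_app/router.ex"}}

A model trained on a different shape (XML-ish tags, a different argument schema) will start emitting malformed calls under pressure, which is exactly the flakiness I mean.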
dandaka 8 minutes ago [-]
I hope they collaborate with open source harness providers (Pi, Opencode) and train models with those. So next generations will have better integration and better overall quality.
o10449366 17 minutes ago [-]
Idk, my recent experience with Claude is that 4.7 barely knows how to use basic bash tools - how to properly check when programs have finished running, even basic stuff like how to run pytest suites and read the failed tests from the output without re-running the suite to specifically look for them. It's shockingly dumb for all of the tooling they've built into Claude Code (the useless Monitoring tool that blocks bash polling/sleeping that actually works, etc.).
I finally got fed up and started using GPT 5.5 for the past 4 days, and it's a breath of fresh air despite feeling much more minimal. With Claude I had to write so many hooks to enforce behaviors it wouldn't remember and lacked common sense on. GPT 5.5 does a much better job with things like knowing the AWS CDK CLI can hang on long CloudFormation deployments and that it should actively check the deployment status using the CloudFormation API rather than hanging for 30+ minutes - and it does this all without asking.
Maybe there's better tooling built into Codex too, but at least on the surface level it seems like how smart the model is makes a significant difference because Claude has more tools than I can count and still struggles to use "grep".
Edit: Like just now - I can't tell you how many times a day I see this sequence:
"Sorry, I'll run in parallel"
"Error editing file"
"File must be read first"
Repeat 10x for the 10 subagents Claude spawned and then it gets stuck until you press escape and it says "You rejected the parallel agents. Running directly now"
cpursley 41 minutes ago [-]
I love to learn more about the system you’re building out in Elixir and your learnings if any of it is public.
dalekkskaro 29 minutes ago [-]
[flagged]
vitaflo 11 hours ago [-]
I'm not exactly sure what the point of this is. Deepseek already has instructions to use its API with many CLIs, including Claude Code, directly:
https://api-docs.deepseek.com/quick_start/agent_integrations...
Also the author checked in their advertising plan: https://github.com/aattaran/deepclaude/commit/a90a399682defc...
The readme absolutely buries the features that are actually non-trivial: It runs a proxy to switch models mid-session, and does combined cost tracking between Anthropic and other models you might be using. The LLM that wrote the readme never updated the general project description to highlight these features.
There probably isn't a point. Someone didn't understand something, didn't research it, so they 1 shotted their first thought and sent it to the front page of HN and all of their socials. It's the future bruh
georgeburdell 6 hours ago [-]
I embrace it at this point. It ends all the shilling of vibe coded tools at work that I have endured over the past year. Everyone can now make their own tools with zero obligation to coordinate beyond shared hardware resources
altmanaltman 6 hours ago [-]
To be fair, HN sent it to the front page, not the user. The rest I agree with.
dev_hugepages 4 hours ago [-]
And now, because we all upvoted and commented on it, the new user's vibe coded slop is on the front page.
2ndorderthought 22 minutes ago [-]
Same place same time tomorrow?
croes 10 hours ago [-]
From vibe coders for vibe coders
2ndorderthought 9 hours ago [-]
I don't always copy paste vibe coded project readme mds into Claude code and ask them to rewrite it but when I do... actually that's all I do now because my goal in life is to make wealthy overvalued companies wealthier.
incrudible 4 hours ago [-]
Anthropic is the opposite of wealthy: the more you use their service, the more money they lose. Unless you think your precious MDs being used for training data is gonna make them rich eventually.
adastra22 3 hours ago [-]
Their marginal inference cost is less than what they charge for it. Normally that is considered profitable...
yard2010 2 hours ago [-]
It's not the md files, it's how you interact with their agents.
kordlessagain 8 hours ago [-]
Problem?
crooked-v 10 hours ago [-]
I'm curious how well it actually works. I tried Deepseek with Hermes and Opencode and it seemed extremely bad about using some of the basic tools given, like the Hermes holographic memory tools, even with system prompt instructions strongly pointing them out.
ttoinou 11 hours ago [-]
I thought the tool format wasn't exactly the same? So plugging any AI into Claude Code requires a format conversion.
selcuka 9 hours ago [-]
DeepSeek has a dedicated Anthropic-compatible endpoint [1].
[1] https://api-docs.deepseek.com/guides/anthropic_api
Many of them expose “anthropic-compatible” APIs for this very purpose.
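As a sketch of what pointing Claude Code at it looks like (the exact base URL is an assumption on my part; verify against [1]):

    # hypothetical values, check the DeepSeek docs in [1]
    export ANTHROPIC_BASE_URL="https://api.deepseek.com/anthropic"
    export ANTHROPIC_AUTH_TOKEN="$DEEPSEEK_API_KEY"
    claude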
faangguyindia 7 hours ago [-]
Qwen also offers an OpenAI-compatible endpoint.
rsanek 1 hour ago [-]
> DeepSeek V4 Pro scores 96.4% on LiveCodeBench and costs $0.87/M output tokens
This is a heavily subsidized price and will only last until the end of the month: "The deepseek-v4-pro model is currently offered at a 75% discount, extended until 2026/05/31 15:59 UTC." [0]
The "supported backends" table is also misleading -- while OpenRouter's servers may be in the US, the only way to get the $0.44/$0.87 pricing is to pass through to the DeepSeek API, which of course is China-based. [1]
I do think the model is quite good; I myself use it through Ollama Cloud for simple tasks. But I think some folks have bought in a little too much to the marketing hype around it.
They expect inference prices to structurally drop once they receive their big batch of Huawei Ascend chips by the second half of the year.
[0] https://api-docs.deepseek.com/quick_start/pricing
[1] https://openrouter.ai/deepseek/deepseek-v4-pro/providers
justech 10 hours ago [-]
If you're looking for Claude Code alternatives, I would first suggest looking into pi.dev or opencode for your harness. And then for models, you can choose from OpenCode Go (IMO the most cost-effective at this moment), OpenRouter, or direct from DeepSeek. Better if you go the Kimi route IMO and just buy a subscription from kimi.com
Looks interesting. Does it offer anything special that pi.dev or opencode does not?
wolttam 4 hours ago [-]
Probably not, `lmcli` is very lean. I would consider it a slightly lower-level tool than either pi.dev or opencode. E.g. there is no built-in coding agent, but it's easy to build one up in the config with your own prompt (or use the example).
It's proven useful for me, and I figure others might appreciate how light of a shim it is between you and the models.
Aeroi 10 hours ago [-]
agreed. OpenCode is a strong base, and with a couple modifications it can become a very effective harness. For my side project mouse.dev I've been combining parts from OpenCode, Claude Code, and Hermes to build a cloud agent architecture that works well from mobile.
CharlesW 10 hours ago [-]
> OpenCode is a strong base, and with a couple modifications it can become a very effective harness.
I personally didn't find it to be competitive with Claude Code as a harness. Can I ask how you modified it to perform better?
Aeroi 9 hours ago [-]
I haven’t run formal evals but i improved the experience for my own needs and it feels noticeably better with these modifications.
- Claude-style subagents
- an MCP layer for higher-level tools
- Cursor-style control plane modes like Ask, Plan, Debug, and Build.
The MCP layer lets the harness use things like GitHub file/code read, PR creation, web search/fetch, structured user questions, plan-mode switching, user skills, and subagents.
So the improvement is mostly from better ui/ux orchestration and tool access. There's some things from hermes that are interesting as well.
Most of my focus has been on applying this stack to sandboxed cloud agents so you can properly code and work from mobile devices.
I can't definitively say that the stack is better or worse than Claude code, more just tuned for my use case I guess.
adobrawy 5 hours ago [-]
I'm a Claude Code Web fan and a rather heavy user. So I was interested in your product. However, I couldn't find an answer on the website. What parts did you find so good that you ported them?
cpursley 38 minutes ago [-]
How does the Kimi subscription compare to Codex and Claude Code in terms of how much mileage you get for the pricing? I mean, I see the prices, but I'm not sure how much usage that buys.
aaurelions 10 hours ago [-]
Another very cost-effective option is Ollama Cloud. In a month of use, I only hit the 5-hour limit once, when I ran 8 agents simultaneously for 2 hours.
kopirgan 7 hours ago [-]
On which tier?
postatic 10 hours ago [-]
definitely worth it - have both ollama cloud, opencode and hermes running to test them all out, working great so far.
bakugo 10 hours ago [-]
> I would first suggest looking into pi.dev
Looked into this one. Thought it was suspicious that it only had 7 open issues on GitHub. Turns out they have a bot that auto-closes every single issue just because.
I honestly have no words.
> Maintainers review auto-closed issues daily and reopen worthwhile ones. Issues that do not meet the quality bar below will not be reopened or receive a reply.
Seems like not an unreasonable way to deal with the problem of large numbers of low quality issues being submitted.
oefrha 4 hours ago [-]
If that process actually happens then there’s absolutely no reason not to have the reviewing maintainer close it after review instead. The only reasonable conclusion is that documented process is aspirational at best and vibed itself at worst.
cromka 5 hours ago [-]
Sounds like a perfect way to agitate the community, going against the established culture like that.
altmanaltman 6 hours ago [-]
But how is it any different from keeping them open?
Like if they are going to sort through all the issues eventually (like they claim), why not just close the ones that are not worthy when they get to them, instead of closing all by default?
Is it just so that the project doesn't have open issues on its GitHub page? But they are open issues in reality, because the maintainer will eventually go through them?
Nothing is "unreasonable" in the sense that an open source project should have the right to do what it wants with its rules, but it's definitely a weird stance.
mellosouls 3 hours ago [-]
They address the decision at the end of those contribution guidelines linked above, specifically:
> It is a guardrail against burnout and tracker spam
It's based on their implied perspective that the majority of submissions don't follow those guidelines, which helps determine their quality threshold.
https://github.com/badlogic/pi-mono/blob/main/CONTRIBUTING.m...
> But how is it any different from keeping them open?
If all open issues are actionable items, that makes expected workload a lot easier to handle.
If most open issues are actually in "needs triage / needs review" state, you lose the signal from the noise.
The issue tracker for a project exists primarily as a tool for maintainers, not for outsiders. Yes, the maintainers could change their workflow to create a new view that only shows triaged tickets.
Or, they could ensure the default 'open' view serves their needs.
https://github.com/badlogic/pi-mono/blob/main/CONTRIBUTING.m...
vanchor3 2 hours ago [-]
Somehow going through closed issues just to reopen them sounds like more effort than just using the built in label system which is made for this purpose, but maybe that's just me.
oarsinsync 12 minutes ago [-]
I can either change my daily workflow to accommodate the noisy herd, or I can change the noisy herd to accommodate my daily workflow.
__cayenne__ 7 hours ago [-]
The maintainer, Mario, sometimes declares the repo is on an “issue holiday” where issues are auto closed. This particular holiday is because there is a big refactor coming up. In non holiday periods issues can be reported as normal.
The idea is for it to be extremely minimal, which strikes me as a very opinionated stance, and not one I agree with.
justinhj 7 hours ago [-]
It's a very interesting project. Many popular open source projects are inundated with poor-quality issues and PRs, hence the defences they are starting to erect.
- https://news.ycombinator.com/item?id=46930961
- https://github.com/mitchellh/vouch
DeathArrow 4 hours ago [-]
>If you're looking for Claude Code alternatives, I would first suggest looking into pi.dev or opencode for your harness.
While those are nice, Claude Code has the largest amount of plugins and skills I want to use.
wizhi 3 hours ago [-]
Aren't skills just literal plaintext files? Why not just copy them?
isege 3 hours ago [-]
> Claude Code is the best autonomous coding agent.
If you look at the terminal-bench@2.0 leaderboard, you'll quickly see it's actually one of the weakest agentic harnesses. Anthropic's own models score lower with Claude Code than with virtually any other harness.
So it's quite the opposite. Claude Code is arguably the worst harness to run models with.
DaanDL 2 hours ago [-]
Okay, but not all results on there are valid; ForgeCode, for instance, has been cheating in the past:
https://debugml.github.io/cheating-agents/#sneaking-the-answ...
Just want to say that I faced this very problem the last week, I discovered OpenCode agent and it works great, with DeepSeek and other models. Try it out guys.
column 1 hour ago [-]
Pi will blow your mind :)
aucisson_masque 1 hour ago [-]
No MCP.
No sub-agents. There's many ways to do this. Spawn Pi instances via tmux, or build your own with extensions, or install a package that does it your way.
No permission popups. Run in a container, or build your own confirmation flow with extensions inline with your environment and security requirements.
No plan mode. Write plans to files, or build it with extensions, or install a package.
No built-in to-dos. Use a TODO.md file, or build your own with extensions.
No background bash. Use tmux. Full observability, direct interaction.
TheServitor 4 hours ago [-]
It's surprisingly easy to hit $200 worth of tokens even at ~$1/M token though. No matter how many times I do the math the coding plans are the better value.
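Rough math, just to illustrate (rates from this thread, usage numbers made up):

    $200 / ($0.87 per 1M output tokens) ≈ 230M output tokens
    e.g. ~8M output tokens/day of agentic work ≈ $7/day ≈ $210/month

So a single heavy agentic user can clear $200/month on API pricing alone.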
Yes and this is a temporary discount which increases to 3.48 USD on 2026/05/31 15:59 UTC.
Source: https://api-docs.deepseek.com/quick_start/pricing
_345 11 hours ago [-]
If you're okay with sonnet level performance, this sounds like a straight upgrade. But I find that sonnet messes up too much, that it ends up not being worth cost optimizing down to using it or another sonnet-level model. Glad to have this as an option though
2ndorderthought 11 hours ago [-]
A lot of people are having good experiences doing things like using opus for designing and using locally hosted qwen3.6 for implementation.
I could see a serious cost reduction story by using opus for design and deepseek for implementation.
Personally I would avoid anthropic entirely. But I get why people don't.
girvo 11 hours ago [-]
Like me: that’s what I do. Either Opus 4.7 or GLM 5.1 for planning, write it out to a markdown file, then farm it out to Qwen 3.6 27B on my DGX Spark-alike using Pi. Works amusingly well all things considered.
brianjking 8 hours ago [-]
How are you interacting with GLM 5.1? Via the Claude Code harness? I really wish they'd release a fully multimodal model already.
2ndorderthought 11 hours ago [-]
How is GLM 5.1? I haven't tried it yet but have been meaning to.
girvo 10 hours ago [-]
It's surprisingly good. Beats MiniMax 2.7 and Qwen 3.5 Plus in my testing (I haven't tested 3.6 Plus though), quite handily. It's far better than Sonnet, and often equivalent to Opus for the web development and OCaml tasks I'm using it for. It definitely isn't Opus 4.7, but it's good enough to earn its keep and is substantially cheaper.
sshine 9 hours ago [-]
I agree with this. And also: it uses more thinking time to get there. So while you get a lot of tokens on their plan, the peak 3x token usage multiplier plus the extra thinking means you run into the rate limit anyway.
girvo 9 hours ago [-]
True, though with the $20 equivalent used for planning only, I don't hit those limits often, vs Claude where Pro can literally hit limits with a single prompt haha
Alifatisk 4 hours ago [-]
I second this, glm-5.1 is incredible.
aftbit 11 hours ago [-]
What hardware are you using to power this?
girvo 10 hours ago [-]
> DGX Spark-alike
Probably wasn't clear enough if you don't know what that is already, apologies
It's an Asus Ascent GX10, which is a little mini PC with 128GB of LPDDR5X as shared memory for an Nvidia GB10 "Blackwell" (kind of, it's a long story) GPU and a MediaTek ARM CPU
sterlind 7 hours ago [-]
pulls up chair
could you tell me the long story?
edit: or wait, is it quasi-Blackwell the way all DGX Sparks are quasi-Blackwell? like the actual silicon is different but it's sorta Blackwell-shaped?
girvo 7 hours ago [-]
Yeah exactly. Shader model 121 is different to SM 120 (consumer Blackwell) and is different again to data centre Blackwell SM100.
The promise of this chip was “write your code locally, then deploy to the same architecture in the data centre!”
Which is nonsense, because the GB10 is better described as “Hopper with Blackwell characteristics” IMO.
Still great hardware, especially for the price and learning. But we are only just starting to get the kernels written to take advantage of it, and mma.sync is sad compared to tcgen05
aftbit 10 hours ago [-]
Ah yeah I saw that, I was just curious which particular mini-PC you were using. I was considering picking up one of the various AI Max 395 boxes before the RAMpocalypse but didn't take the plunge. Thanks for the response!
girvo 9 hours ago [-]
I heavily considered one of the AMD Strix Halo boxes, but part of the reason I wanted this was to learn CUDA :)
chrsw 10 hours ago [-]
I keep re-learning this lesson: I chug along with a lesser model then throw a problem at it that's too complex. Then I try different models until I give up and bring in Opus 4.6 to clean up.
brianwawok 10 hours ago [-]
And I keep using Opus to, like, make git commits. Really just need a smart router that is actually smart, vs having to micromanage model choice.
sterlind 7 hours ago [-]
the problem is managing the contexts. your session might fit in Opus, but will it fit in the smaller model you dispatch the git commit to? even so, will it eat too much on prefill? do you keep compactions around for this, or RAG before dispatch or something? how do you button the response back up?
all doable but all vaguely squishy and nuanced problems operationally. kinda like harness design in general.
energy123 7 hours ago [-]
It's not even that much cheaper, GPT 5.5 is about 2x more expensive per task than Deepseek v4 Pro when you adjust for less token usage, according to Artificial Analysis. Doesn't seem worth it to me.
cpursley 29 minutes ago [-]
Are we talking pay as you go API or vs plans?
Culonavirus 5 hours ago [-]
We're not yet at a point of saturation where all the frontier models are of somewhat comparable "intelligence" and we could decide which to use based on other factors (speed, effective context window, etc.), so I honestly don't see why you would (as a company or an employee) not use the best available model with the highest (or at least second highest) thinking effort. The fees are not exactly cheap, but not that expensive either.
nyssos 4 hours ago [-]
Agreed that we're not at saturation, but we don't have a canonical "best" either. For example ChatGPT 5.5 + Codex is, in my experience, vastly superior to Opus 4.7 + Claude Code at sufficiently well-specified Haskell, but equally vastly inferior at correctly inferring my intent. Deepseek may well have its own niche, though I haven't used it enough to guess what it might be.
maxdo 7 hours ago [-]
This is the problem: you need the best model, not just a good one, for:
- Good architecture, which requires reading specs, code, etc. reads like: lots of tokens in/out
- Bug fixing — same, plus logs, e.g. datadog
Once you've found the path, patches are trivial and the savings are tiny unless you're doing refactoring/cleanup.
testing gets more and more complicated. Take a look at opencode go, and you see this:
> Includes GLM-5.1, GLM-5, Kimi K2.5, Kimi K2.6, MiMo-V2-Pro, MiMo-V2-Omni, MiMo-V2.5-Pro, MiMo-V2.5, Qwen3.5 Plus, Qwen3.6 Plus, MiniMax M2.5, MiniMax M2.7, DeepSeek V4 Pro, and DeepSeek V4 Flash
and now you're on your own with the bugs all of these models can produce at scale. Am I missing anything in this picture? What is the real use of cheaper models?
JSR_FDED 4 hours ago [-]
I'd argue that you need the model that's good enough, not the best.
mohsen1 3 hours ago [-]
This has been my experience working on tsz.dev. Only Opus 4.7 and GPT 5.5 can really be productive for the remaining test cases.
willio58 10 hours ago [-]
I don’t find this with sonnet at all. As long as I have a solid Claude.md and periodically review the output and enforce good code practices via basic CI gates I’ve rarely ever found myself having to switch to opus
2ndorderthought 9 hours ago [-]
You might be surprised then at how well cheaper models solve your problems
shay1607m 39 minutes ago [-]
Interesting setup
do you have any benchmarks on:
- token usage over time
- failures/retry rates
would be great to see how it behaves in production
lukaslalinsky 3 hours ago [-]
I've been using DeepSeek v4 pro as an alternative to Claude models and for the first time I can see it as a real replacement. With the other Chinese models, I was missing something, but DeepSeek seems good enough for the kind of development I want to do.
dopeepsreaddocs 8 hours ago [-]
Did... Did you just ask an AI to one-shot something that normally amounts to no more than setting two env variables?
jay1996523 3 hours ago [-]
Claude code can already use the DeepSeek API, so what are the advantages of this tool?
nclin_ 8 hours ago [-]
Is claude code the best coding harness? Anyone running evals on that?
ahmadyan 8 hours ago [-]
In my anecdotal experience, it is not. Same model, opus, works better in 3P harnesses such as Factory Droid or Amp.
Claude Code, on the other hand, is the most subsidized one, both for consumers (through the Max subscription) and for enterprises (token discounts). It is also heavily optimized for cost, especially token caching and reduced thinking, at the expense of quality.
DeathArrow 3 hours ago [-]
Terminal Bench is testing the agent harness.
The best two are Codex and Forge Code.
However I am using plugins and skills that are only compatible with Claude Code or work best with Claude Code.
So, for me, Claude Code with plugins like claude-meme, Context Mode, Superpowers and Get Shit Done is better than other tools.
I think everyone should test multiple models and multiple agent harnesses for their specific needs, codebase, and way of working.
alexdns 11 hours ago [-]
obviously vibe coded (co-authored) + the prices don't even match
2ndorderthought 11 hours ago [-]
It's going to be real hard to find headlines that weren't vibe coded from here on out unfortunately.
SchemaLoad 10 hours ago [-]
Unless I actually know the author I assume everything here is vibeslop and full of mistakes.
Maybe I need to switch to some news publication that actually does real research and writing still. Because public forums like this have been completely destroyed by LLMs.
cyanydeez 11 hours ago [-]
welp, pack it in boys, it was nice conceptualizing all of you as real humans on the internet. I guess I'll just have to go touch grass if I want to feel parasocial.
dragontamer 10 hours ago [-]
I mean, we have the tech and community to actually build in person meetups and sign CRT certificates, right?
If we touch grass in person and swap certificate requests, we can actually rebuild a trust network.
This is a pretty old problem with regards to clubs / secret societies and whatnot. And with certificates / PKI, our modern security tools have solved all the technical problems.
2ndorderthought 10 hours ago [-]
I wish I could be invited to a secret club of guaranteed humans. Someone hand me a certificate next time you see me! Also don't stab me kthxbye
cyanydeez 10 hours ago [-]
Unfortunately, a lot of what's happening in the tech world seems to be driven by some super serious AI cults, so I'm not sure going offline like this is any better.
2ndorderthought 10 hours ago [-]
Yea but we could have fun. Play some DnD. Drink tea or whiskey. Eat pizza pie. Light saber battle. Buy a megaphone and hang out at a street corner telling passersby they are perfectly acceptable and worthy of kindness and love
inciampati 9 hours ago [-]
poorly vibe coded. machines can check details easily, use them.
itrunsdoomguy 40 minutes ago [-]
Does it play Doom?
sowild_fun 6 hours ago [-]
Using a bunch of CLIs to work with DeepSeek V4, I've found that Langcli is the best fit for DeepSeek V4. For programming tasks, the cache hit rate is above 95%.
Not only can it seamlessly and dynamically switch between DeepSeek V4 Flash, V4 Pro, and other mainstream models within the same context, but it is also 100% compatible with Claude Code.
https://api-docs.deepseek.com/quick_start/agent_integrations...
sfewfweg 6 hours ago [-]
Langcli + deepseek v4 is very good
orliesaurus 11 hours ago [-]
Is there a way to do this directly using the Claude Code CLI (which I already have installed) and OpenRouter??
    ANTHROPIC_BASE_URL="https://openrouter.ai/api" \
    ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY" \
    ANTHROPIC_DEFAULT_SONNET_MODEL="deepseek/deepseek-v4-flash" \
    CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC=1 \
    claude
gnat 11 hours ago [-]
This repo's README explains how it works and you can do it yourself. claude looks for environment variables that say which API endpoint to talk to, which key to pass, which model name to use for haiku/sonnet/opus-level workloads, etc.
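A sketch of the idea (ANTHROPIC_DEFAULT_SONNET_MODEL appears upthread; the haiku/opus variable names are my assumption, so verify them against the README):

    # hypothetical per-tier overrides
    export ANTHROPIC_BASE_URL="https://openrouter.ai/api"
    export ANTHROPIC_AUTH_TOKEN="$OPENROUTER_API_KEY"
    export ANTHROPIC_DEFAULT_HAIKU_MODEL="deepseek/deepseek-v4-flash"
    export ANTHROPIC_DEFAULT_SONNET_MODEL="deepseek/deepseek-v4-pro"
    export ANTHROPIC_DEFAULT_OPUS_MODEL="deepseek/deepseek-v4-pro"
    claude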
999900000999 7 hours ago [-]
I just spent half my day getting CUDA and LLAMA to work with my 5070TI.
I was able to use it in agent mode with Roo, I stopped after having it write out a plan, but I'll continue when I have more time.
Deepseek feels less likely to do a straight up rug pull since you can self host with enough money, but I'm still more excited about local solutions.
Usually I just need grunt work done. I'm not solving difficult problems.
Copenjin 3 hours ago [-]
I wonder if openrouter will replicate that 120x caching, I suppose they will?
langitbiru 7 hours ago [-]
I'm wondering why DeepSeek didn't create an AI coding agent like Kimi Code.
sourcecodeplz 42 minutes ago [-]
I think it is because they focus on what they know best. Coding an LLM harness is nothing spectacular.
vagab0nd 10 hours ago [-]
This has become a problem for me. I like trying new things. But I also know that in about a week, there's going to be a better/cheaper setup. And a week after that. And ideally I'd like to get some coding done when I'm not tinkering with the tools.
So I think I'll stay with CC for now.
kordlessagain 8 hours ago [-]
CC has the ability to use Ollama as well, which includes the ability for Ollama to proxy to Ollama's cloud models. It's brilliant, and works with a single Ollama command that doesn't mess with CC at all (so you can run them at the same time).
If you are interested, I've built an agentic terminal that helps manage these types of things better: https://deepbluedynamics.com/hyperia
Nice, it's quite useful to have a project like this which streamlines the setup necessary to use other "brains" in the Claude Code "body". I personally will give this a try, but I just find the message on pricing a bit disingenuous: the DeepSeek price of "$0.87/M output tokens" is a discount, and this setup anyway needs a claude.ai subscription offering Claude Code, which is now $100/month minimum.
karel-3d 1 hour ago [-]
Can I... somehow run this locally? DeepSeek is opensource? Do I even need their API key?
(I have no experience with running anything locally, maybe it's a stupid question)
zozbot234 45 minutes ago [-]
Waiting for official support in llama.cpp. There is a fork that can run a lightly quantized (Q2 expert layers) DeepSeek V4 Flash in 128GB RAM without offloading weight fetching from disk.
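For a rough sense of why that fits (the parameter count here is hypothetical; the real figure may differ):

    e.g. 400B params × ~2.5 bits/weight ≈ 125 GB of weights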
DeathArrow 4 hours ago [-]
You don't need Deep Claude. Claude Code works with any model that exposes an Anthropic-compatible API endpoint.
I am using Claude Code with GLM 5.1, MiniMax M2.7, Kimi K2.6 and Xiaomi MiMo V2.5 Pro.
dbeley 7 hours ago [-]
Honestly with the likes of Opencode / pi / hermes I don't really find the "Claude Code agent loop" part particularly interesting.
The edge Anthropic has over others lies in its models' performance. CLI tooling (and obviously pricing) is definitely not better than the others'.
danny_codes 6 hours ago [-]
Except the model isn't particularly better anymore, as compared to the newest wave of FOSS models
Tanxsinxlnx 2 hours ago [-]
Does it support the AWS Bedrock provider?
Lihh27 10 hours ago [-]
the wrapper is basically env var glue. You’re still betting the whole loop on Anthropic's closed client.
game_the0ry 10 hours ago [-]
Cost engineering [1] will be the next hot topic for AI.
[1] A fancier way of saying "reducing cost."
triyambakam 6 hours ago [-]
And if I don't care about cost, what about actual performance?
dukeofdoom 7 hours ago [-]
Is there some way to make claude/codex beep when it finishes a task?
0xjeffro 2 hours ago [-]
[dead]
esafak 11 hours ago [-]
Why wouldn't you use something open source like OpenCode, which already supports DSv4 and has more features than CC?
CharlesW 10 hours ago [-]
Coding harnesses make a big difference, and OpenCode is notably less effective than Claude Code (1) in my experience, (2) with the models I've tried it on. (I've not yet tried it with DSv4.)
dlx 11 hours ago [-]
As someone who does use other models with CC, I am curious about opencode: what extra features does it have that you find essential?
esafak 10 hours ago [-]
I like being able to add a wide array of models, define perms for agents and subagents, turn MCPs on and off at will, and be able to fix bugs I find in it.
dlx 10 hours ago [-]
fair enough...any drawbacks that you've found?
esafak 10 hours ago [-]
Its UI isn't as slick, and it has bugs, but so does CC and you can submit a PR to have them fixed in OC.
DeathArrow 3 hours ago [-]
If using something open source, I'd say Forge Code has better results than Open Code, at least according to Terminal Bench.
ttoinou 11 hours ago [-]
More features than CC?
Also opencode tracks you by default. It's not safe. Every first prompt you send is routed through their servers and logged, and they can use your data however they want
sedawkgrep 10 hours ago [-]
I thought this was debunked a while ago?
esafak 10 hours ago [-]
I could not find any evidence of prompt logging. The code is open; can you point me to it?
portsentinel 6 hours ago [-]
I am now wondering how far agentic AI can go and how much we can achieve
fHr 9 hours ago [-]
layer on layer on layer to refactor a bunch of lines xD
2ndorderthought 11 hours ago [-]
Oh shoot now the next CC upgrade will blow your subscription for doing this
morpheos137 11 hours ago [-]
Anthropic messed up big time: the harness works with any muh commodity LLM. Meanwhile VCs were duped by the myth of FOOM AGI. Probably not a coincidence that Anthropic is enmeshed with the sci-fi fan fic forum known as LessWrong. The world wants useful tools. The Bay Area bubble, in contrast, thrives on Mythos.
hgyyy 10 hours ago [-]
I think OAI and Anthropic will be OK for a year or two. But after that, if they still continue to earn revenues from selling tokens to firms/software engineers, they will be in serious trouble.
The American firms are not demonstrating escape velocity, and as long as China offers something somewhat comparable at a very low price to compensate for any difference in quality, they will not be generating enough in cash flows to finance reinvestment. I highly doubt they'll be able to continue raising external financing for numerous periods from here on out - they gotta start showing strong financials and that they are running away from the open source models.
LeFantome 8 hours ago [-]
The performance gap will likely close as Chinese hardware improves. This is happening very rapidly.
Already DeepSeek v4 is being hosted on Huawei Ascend 950. What do you think those cost relative to NVIDIA gear?
morpheos137 9 hours ago [-]
I wouldn't put it past the US gov to ban foreign models. They tried to ban TikTok. What is being demonstrated here is that Silicon Valley cannot withstand a competitive market.
LeFantome 8 hours ago [-]
Good luck banning Open Source models.
Not only that but other countries are very unlikely to follow suit, so it is just a straight-up productivity tax on the US.
morpheos137 8 hours ago [-]
Yeah, see the Nvidia/China US gov self-own. The assumption seems to be that 1.4 billion people in a middle-income country are dependent on 300 million for tech.
bwfan123 8 hours ago [-]
> Anthropic messed up big time: the harness works with any muh commodity LLM
That surprised me too. The intelligence is at the client, and by making that open, Anthropic has commoditized the coding agent.
dividendflow 41 minutes ago [-]
[flagged]
aliljet 6 hours ago [-]
[dead]
alattaran 12 hours ago [-]
[flagged]
kk_mors 7 hours ago [-]
[dead]
volume_tech 9 hours ago [-]
[flagged]
deadbabe 10 hours ago [-]
I had a call with our CTO and we are pivoting away from Claude Code to DeepClaude because the cost savings are too substantial to ignore.