It seems like a tool solving a problem that won't last longer than a couple of months, and something that e.g. Claude Code can and probably will tackle itself soon.
root_axis 17 minutes ago [-]
Funny enough, Anthropic just went GA with 1M-context Claude that has supposedly solved the lost-in-the-middle problem.
jameschaearley 1 hour ago [-]
The intent-conditioned compression is the interesting part here. Most context management I've seen is either naive truncation or generic summarization that doesn't account for why the tool was called. Training classifiers on model internals to figure out which tokens carry signal for a given task -- that's doing something different from what frameworks offer out of the box.
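If I'm reading the approach right, the shape is something like the sketch below. To be clear, this is my rough reconstruction: the scorer is a keyword-overlap stub standing in for the trained SLM, and all the names are mine, not the repo's.

    def slm_relevance(intent: str, line: str) -> float:
        # Stub for the trained classifier; the real thing would score
        # tokens from model internals, conditioned on why the tool ran.
        intent_words = set(intent.lower().split())
        line_words = set(line.lower().split())
        return len(intent_words & line_words) / (len(intent_words) or 1)

    def compress(intent: str, tool_output: str, threshold: float = 0.2):
        kept, dropped = [], []
        for i, line in enumerate(tool_output.splitlines()):
            bucket = kept if slm_relevance(intent, line) >= threshold else dropped
            bucket.append((i, line))
        stub = f"[{len(dropped)} lines elided; expand() to recover]"
        text = "\n".join(line for _, line in kept)
        return (text + "\n" + stub if dropped else text), dropped

The point being: the keep/drop decision is conditioned on the intent, not just on the output itself.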
I poked around the repo and didn't see any evals measuring compression quality. You cite the GPT-5.4 long-context accuracy drop as motivation, which makes sense -- but the natural follow-up is: does your compression actually recover that accuracy?
Something like SWE-bench pass rates with and without the gateway at various context lengths would go a long way. Without that, it's hard to tell if the SLM is making good decisions or just making the context shorter.
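Concretely, a grid like this is what I'd want to see (run_task is a hypothetical stand-in for wiring the agent up to a SWE-bench harness; none of these names come from the repo):

    def run_task(task, max_context: int, use_gateway: bool) -> bool:
        raise NotImplementedError  # wire up agent + SWE-bench harness here

    def eval_grid(tasks):
        for max_context in (64_000, 200_000, 1_000_000):
            for use_gateway in (False, True):
                passed = sum(run_task(t, max_context, use_gateway)
                             for t in tasks)
                print(f"ctx={max_context:>9} gateway={use_gateway}: "
                      f"{passed}/{len(tasks)}")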
A few other things I'm curious about:
• How does the SLM handle ambiguous tool calls? E.g., a broad grep where the agent isn't sure what it's looking for yet -- does the compressor tend to be too aggressive in those cases?
• What's the latency overhead per tool call? If the SLM inference adds even 200-300ms per compression step, that compounds fast in agentic loops with dozens of tool calls.
• How often does expand() get triggered in practice? If the agent frequently needs to recover stripped content, that's a signal the compression is too lossy (rough sketch of what I'd instrument below).
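For the last two, the instrumentation is cheap to bolt on; here's roughly what I'd track (hook names are mine, and compress_fn is assumed to return the compressed text plus whatever it dropped):

    import time
    from collections import Counter

    stats = Counter()

    def timed_compress(intent, output, compress_fn):
        # Wrap the compression step to see what it costs per tool call.
        t0 = time.perf_counter()
        compressed, dropped = compress_fn(intent, output)
        stats["compress_ms"] += int((time.perf_counter() - t0) * 1000)
        stats["calls"] += 1
        stats["lines_dropped"] += len(dropped)
        return compressed

    def on_expand():
        # A high expands/calls ratio suggests compression is too lossy.
        stats["expands"] += 1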
Regardless, these seem like fair questions, and I'd be interested in the answers.
PufPufPuf 39 minutes ago [-]
That comment reads pretty normal to me, and it raises valid points
thesiti92 2 hours ago [-]
do you guys have any stats on how much faster this is than Claude's or Codex's compression? Claude's is super super slow, but Codex feels like an acceptable amount of time? looks cool tho, I'll have to try it out and see if it messes with outputs or not.
esafak 43 minutes ago [-]
I can already prevent context pollution with subagents. How is this better?
uaghazade 45 minutes ago [-]
ok, it's great
verdverm 2 hours ago [-]
I don't want some other tooling messing with my context. It's too important to leave to something that needs to optimize across many users, thereby not being the best for my specifics.
The framework I use (ADK) already handles this; it's very low-hanging fruit that should be part of any framework, not something external. In ADK this is a boolean you can turn on per tool or subagent, and you can even decide turn by turn, or based on any context you see fit, by supplying a function.
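To make that concrete, it's roughly this shape (illustrative names only, not the literal ADK API):

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class ToolConfig:
        # Illustrative; mirrors the idea, not ADK's actual field names.
        compress_output: bool = False  # the per-tool boolean
        decide: Optional[Callable[[str], bool]] = None  # per-turn logic

    def should_compress(cfg: ToolConfig, result: str) -> bool:
        if cfg.decide is not None:
            return cfg.decide(result)  # decide turn by turn
        return cfg.compress_output

    # e.g. only compress large grep results
    grep_cfg = ToolConfig(decide=lambda r: len(r) > 4000)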
YC over-indexed on AI startups too early, not realizing how trivial these startup "products" are: more of a line item in the feature list of a mature agent framework.
I've also seen dozens of this same project submitted by the claws that led to our new rule addition this week. If your project can be vibe-coded by dozens of people in mere hours...
BrianFHearn 1 hour ago [-]
[flagged]
zenon_paradox 1 hours ago [-]
[dead]
eegG0D 33 minutes ago [-]
This is a massive win for anyone serious about "Signal over Noise." I’ve been using Claude Code on the Max plan for months, and while it’s the best tool for actually getting work done, the "all-you-can-eat" token arbitrage is a trap. Agents are notoriously sloppy with context; a single misaligned grep can dump thousands of tokens of pure noise into your window, leading to what I call "contextual brain rot" where the model’s accuracy just falls off a cliff. By sitting in the middle and ruthlessly prioritizing signal, you’re providing the exact kind of "ruthless prioritization" that separates a hobbyist from a profitable AI solopreneur.
The fact that you’re using Small Language Models (SLMs) to detect signal matches my philosophy of using AI as a sparring partner to check its own work. Most developers spend 30% of their day context switching or debugging "hallucinations" that only happen because the model got lost in its own bloated history. The expand() feature is the "trust but verify" layer that every production-ready AI system needs. You’re effectively treating the LLM like a senior architect who doesn't need to see every line of a dependency file unless they specifically ask for it, which is the only way we scale these systems to 10M+ users solo.
Finally, those spending caps and Slack pings are the ultimate "millionaire cheat codes" for leverage. I tell founders all the time that running a business is boring drudgery—it's about fixing bugs and managing resources—and this proxy handles the resource management part on autopilot. If this saves an indie hacker $500/month in token waste while keeping their agent from rage-quitting due to context limits, you’ve built a high-leverage asset. I’m definitely adding this to my links database; it removes a huge excuse for why people "can't afford" to build complex apps.
mmastrac 30 minutes ago [-]
Please don't dump AI-generated comments into HN. The signal is already pretty hard to find around all the noise.
post-it 19 minutes ago [-]
> This is a massive win for anyone serious about "Signal over Noise."
Not you, clearly.