Cord: Coordinating Trees of AI Agents (june.kim)
mirekrusin 17 minutes ago [-]
Nice one.

It looks to me like distinguishing spawn and fork is unnecessary complexity.

IMHO you should make the context query the first-class primitive instead.

I'd keep just one, drop the other, and add a context-query parameter.

The context-query parameter can be a natural-language instruction for how to compact the current context before passing it to the subagent.

When invoking it you can use values like "empty" (nothing, start fresh), "summary" (summarize everything), "relevant information from a web designer's PoV" (extract only what's relevant), "bullet points about X", etc.

This way the LLM can decide what's relevant, express it tersely, and the compaction itself won't clutter the current context: it's handled by a compaction subagent in isolation and discarded on completion.

I think with this approach you'd get better-quality results.

What makes it first class is that it has to be a built-in tool with access to the context (the client itself), i.e. it can't be implemented by an isolated MCP server, because you want to avoid rendering the context as an input parameter during the tool call; you just want a short query.

It could equally be called "spawn" or "fork", but if you want to keep it backward compatible or compare results in evals, you may want to call this approach something like "handover(prompt, context_query) -> conversation_id".
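
A rough sketch of what such a handover primitive could look like (all names here are illustrative, not Cord's actual API, and the model call is stubbed):

    import uuid

    CONVERSATIONS: dict[str, list[dict]] = {}

    def llm(messages: list[dict]) -> str:
        """Stub for a real model call; swap in an actual client."""
        return "<model output>"

    def compact(messages: list[dict], context_query: str) -> list[dict]:
        # "empty" starts fresh; anything else is a natural-language
        # instruction handled by an isolated compaction subagent whose
        # own working context is discarded once it returns.
        if context_query == "empty":
            return []
        summary = llm(messages + [{
            "role": "user",
            "content": f"Compact this conversation: {context_query}",
        }])
        return [{"role": "system", "content": summary}]

    def handover(parent_id: str, prompt: str, context_query: str) -> str:
        seed = compact(CONVERSATIONS[parent_id], context_query)
        child_id = str(uuid.uuid4())
        CONVERSATIONS[child_id] = seed + [{"role": "user", "content": prompt}]
        return child_id  # the subagent then runs on this conversation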

sriku 44 minutes ago [-]
We built something like this by hand without much difficulty for a product concept. We'd initially used LangGraph, but we ditched it and built our own, out of revenge for LangGraph wasting our time on what could've simply been an ordinary Python function.

We're never again committing to any "framework", especially when something like Claude Code can write one for you from scratch, exactly for what you want.

We have code on demand. Shallow libraries and frameworks are dead.

dcre 1 hour ago [-]
Not exactly a surprise Claude did this out of the box with minimal prompting, considering they’ve presumably been RLing the hell out of it for agent teams: https://code.claude.com/docs/en/agent-teams
vlmutolo 1 hour ago [-]
I wonder if the “spawn” API is ever preferable to “fork”. Do we really want to remove context if we can help it? There will certainly be situations where we have to, but then what you want is good compaction for the subagent. “Clean-slate” compaction seems like it would always be suboptimal.
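
For concreteness, the distinction as the thread describes it, with illustrative signatures (not Cord's actual API):

    Message = dict  # e.g. {"role": "user", "content": "..."}

    def spawn(prompt: str) -> list[Message]:
        # Clean slate: the subagent sees only its prompt.
        return [{"role": "user", "content": prompt}]

    def fork(parent: list[Message], prompt: str) -> list[Message]:
        # Inherit everything: the subagent sees the full parent context.
        return list(parent) + [{"role": "user", "content": prompt}]
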
jamilton 1 hour ago [-]
Feels very AI-written, in a way that makes it annoying to read with all the repetitive short sentences.

Neat concept though; it would be cool to see some performance tests on some tasks.

mikert89 49 minutes ago [-]
All of these frameworks will go away once the model gets really smart. It will just be tool search, tools, and the model.

In the short run, I've found the OpenAI Agents one to be the best.

cjonas 20 minutes ago [-]
This approach seems interesting, but in my experience, a single "agent" with proper context management is better than a complicated agent graph. Dealing with hand-off (+ hand back) and multiple levels of conversations just leaves too much room for critical information to get siloed.

If you have a narrow task that doesn't need full context, then agent delegation (putting an agent or inference behind a simple tool call) can be effective. A good example is to front your RAG with a search() tool backed by a simple "find the answer" agent that deals with the context and can run multiple searches if needed.
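
A hedged sketch of that pattern (retrieval and model calls are stubbed; none of this is PydanticAI's API):

    def retrieve(query: str) -> list[str]:
        """Stub for a vector-store lookup."""
        return [f"<chunk matching {query!r}>"]

    def llm(prompt: str) -> str:
        """Stub for a real model call; a real model would either
        answer or reply 'REFINE: <new query>' to search again."""
        return "<answer>"

    def search(question: str, max_rounds: int = 3) -> str:
        # The only tool the outer agent sees. The inner "find the
        # answer" agent may run several retrievals, but only the final
        # answer leaves this function, so raw chunks never pile up in
        # the caller's context.
        query = question
        for _ in range(max_rounds):
            chunks = retrieve(query)
            reply = llm(
                f"Question: {question}\nContext: {chunks}\n"
                "Answer, or reply 'REFINE: <new query>' to search again."
            )
            if not reply.startswith("REFINE:"):
                return reply
            query = reply.removeprefix("REFINE:").strip()
        return "No confident answer found."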

I think the PydanticAI framework has the right approach of encouraging agent delegation and sequential workflows first, and trying to steer you away from graphs [0].

[0]: https://ai.pydantic.dev/graph/

mbirth 46 minutes ago [-]
Not to be confused with:

cord - The #1 AI-Powered Job Search Platform for people in tech

frk_ai_8b2e 2 hours ago [-]
[flagged]
frk_ai_8b2e 2 hours ago [-]
[flagged]
fritzo 2 hours ago [-]
Would those agents happen to be named frk_ai_8b2e and that platform news.ycombinator.com?
infecto 2 hours ago [-]
Glad you said it first. I thought the particular comment length, and then two back-to-back comments in the same minute, seemed strange.