I'm not sure how valid most of these points are. A lot of the latency in an agentic system is going to be the calls to the LLM(s).
From the article:
"""
Agents typically have a number of shared characteristics when they start to scale (read: have actual users):
They are long-running — anywhere from seconds to minutes to hours.
Each execution is expensive — not just the LLM calls, but the nature of the agent is to replace something that would typically require a human operator. Development environments, browser infrastructure, large document processing — these all cost $$$.
They often involve input from a user (or another agent!) at some point in their execution cycle.
They spend a lot of time awaiting i/o or a human.
"""
No. 1 doesn't really point to one language over another, and all the rest show that execution speed and server-side efficiency are not very relevant. People ask agents a question and do something else while the agent works. If the agent takes a couple of seconds longer because you've written it in Python, I doubt anyone would care (in the majority of cases, at least).
I'd argue Python is a better fit for agents, mostly because of the mountain of AI-related libraries and support that it has.
> Contrast this with Python: library developers need to think about asyncio, multithreading, multiprocessing, eventlet, gevent, and some other patterns...
Agents aren't that hard to make work, and you can get most of the use (and paying users) without optimizing every last thing. And besides, the mountain of support you have for whatever workflow you're building means that someone has probably already tried building at least part of what you're working on, so you don't have to go in blind.
tptacek 2 days ago [-]
That's true from a performance perspective but, in building an agent in Go, I was thankful that I had extremely well-worn patterns to manage concurrency, backlogs, and backpressure given that most interactions will involve one or more transactions with a remote service that takes several seconds to respond.
(I think you can effectively write an agent in any language and I think JavaScript is probably the most popular choice. Now, generating code, regardless of whether it's an agent or a CLI tool or a server --- there, I think Go and LLMs have particularly nice chemistry.)
philwelch 2 days ago [-]
Go still has a much better concurrency story. It’s also much less of a headache to deploy since all you need to deploy is a static binary and not a whole bespoke Python runtime with every pip dependency.
TypingOutBugs 2 days ago [-]
Go is definitely better, but with uv you can install all dependencies, including Python itself, with only curl.
ashwinsundar 1 days ago [-]
Is that what uv sync does under the hood, just curling over all the dependencies and the Python version defined in .python-version?
Hasnep 22 hours ago [-]
I think they meant you can use curl to install uv and then you don't need to (manually) install anything else
TypingOutBugs 13 hours ago [-]
Yeah that’s what I meant, apologies if unclear
ramesh31 1 days ago [-]
Agents are the orchestration layer, i.e. a perfect fit for Go (or Erlang, or Node). You don't need a "mountain of AI-related libraries" for them, particularly given the fact that what we call an agent now has only existed for less than 2 years. Anything doing serious IO should be abstracted behind a tool interface that can (and should) be implemented in whatever domain specific tooling is required.
serjester 1 days ago [-]
I wouldn’t underestimate the impact of having massive communities around a language. Basically any problem you have has likely already been solved by 10 other people. With AI being as frothy as it is, that’s incredibly valuable.
Take, for example, being able to easily swap models: in Python it's trivial with litellm. In niche languages you're lucky to even have an official, well-maintained SDK.
pdimitar 14 hours ago [-]
And I wouldn't overestimate a language's popularity. It's mostly a social phenomenon and rarely has anything to do with technical prowess.
I agree that integration with the separate LLMs / agents can and does accelerate initial development. But once you write the integration tooling in your language of choice -- likely a few weeks worth of work -- then it will all come down to competing on good orchestration.
Your parent poster is right: languages like Erlang / Elixir or Golang (or maybe Rust as well) are better-equipped.
jeswin 22 hours ago [-]
Go has few advantages for this kind of workload - most of the time it'll just be waiting on I/O. And you suffer from the language itself; many type system features that you get for free in modern languages require workarounds in Go.
I've found that TypeScript is an excellent glue language for all kinds of AI. Python, followed by TS enjoy broad library support from vendors. I personally prefer it over Python because the type system is much more expressive and mature. Python is rapidly improving though.
> It turns out, cancelling long-running work in Node.js and Python is incredibly difficult for multiple reasons:
Evidence is lacking for this claim. Almost all tools out there support cancellations, and they're mostly either Python or JS.
pjmlp 16 hours ago [-]
Plus, if one really needs more performance than V8 can deliver, I'd rather write a native module in C++/Rust than reach for Go.
pantsforbirds 2 days ago [-]
I've been messing around with an Elixir + BEAM based agent framework. I think a mixture of BEAM + SQLite is about as good as you can get for agents right now.
You can safely swap out agents without redeploying the application, the concurrency is way below the scale BEAM was built for, and creating stateful or ephemeral agents is incredibly easy.
My plan is to set up a base agent in Python, TypeScript, and Rust using MCP servers, to allow users to write more complex agents in their preferred programming language too.
nilslice 1 days ago [-]
you should check out the Extism[0] project and the Elixir SDK[1]. This would allow you to write the core services, routing, message passing, etc. in Elixir, leveraging all that BEAM/OTP have to offer, and then embed "agents" written in other languages as small Wasm modules that act like in-process plugins.
Agents easily spend >90% of their time waiting for LLMs to reply and optionally executing API calls in other services (HTTP APIs and DBs).
In my experience the performance of the language runtime rarely matters.
If there ever was a language feature that matters for agent performance and scale, it's actually the performance of JSON serialization and deserialization.
fixprix 1 days ago [-]
Yep exactly, might as well use a language that works with JSON natively, like TypeScript, which arguably has a far more powerful type system than Go.
zveyaeyv3sfye 1 days ago [-]
> like TypeScript; which has arguably far more powerful type system than Go.
"arguably".
TypeScript is just a thin wrapper over JavaScript, which doesn't have these types at all.
hombre_fatal 18 hours ago [-]
Wait, you can't be saying that TypeScript doesn't have a much more powerful type system than Go.
ADTs, mapped types, conditional types, template literal types, partial higher-kinded types, and real inference on top of all that.
It had one of the most fully loaded type systems out there while the Go team was still asking for community examples of where generics might be useful, because they weren't sure generics were worth it.
Hasnep 22 hours ago [-]
And yet its type system is Turing complete
fritzo 1 days ago [-]
In my experience, the 2nd most costly function in agents (after LLM calls) is diffing/patching/merging asynchronous edits to resolve conflicts. Those conflict resolution operations can call out to low-level libraries, but they are still quite expensive optimization problems, compared to serialization etc.
energy123 1 days ago [-]
What diffing/patching/merging library are you working with? Or are you building your own?
autogn0me 1 days ago [-]
can you be more specific about this?
huqedato 2 days ago [-]
Following the article's logic, Elixir is a better fit for agents. Ideal I would say.
skybrian 2 days ago [-]
For long-running, expensive processes that do a lot of waiting, a downside is that if you kill the process running the goroutine, you lose all your work. It might be better to serialize state to a database while waiting? But this adds a lot of complexity and I don’t know any languages that make it easy to write this sort of checkpoint-based state machine.
abelanger 2 days ago [-]
OP here - this type of "checkpoint-based state machine" is exactly what platforms which offer durable execution primitives like Hatchet (https://hatchet.run/) and Temporal (https://temporal.io/) are offering. Disclaimer: am a founder of Hatchet.
These platforms store an event history of the functions which have run as part of the same workflow, and automatically replay those when your function gets interrupted.
I imagine synchronizing memory contents at the language level would involve much more overhead than synchronizing at the output level.
tptacek 2 days ago [-]
This is also how our orchestrator (written in Go) is structured. JP describes it pretty well here (it's a durable log implemented with BoltDB).
Nice! It makes a lot of sense for orchestrating infra deployments -- we also started exploring Temporal at my previous startup for many of the same reasons, though at one level higher to orchestrate deployment into cloud providers.
lifty 1 days ago [-]
What are the main differences between temporal and hatchet?
abelanger 20 hours ago [-]
The primary difference is that Hatchet is an all-purpose platform for async jobs, so while durable execution is a pattern that we support, we have a lot of other features like concurrency and fairness control, event ingestion, custom queues, dynamic rate limiting, streaming from a background job, monitoring, alerting, DAG-based executions, etc. There's a bit more on this/our architecture here: https://news.ycombinator.com/item?id=43572733.
The reason I started working on Hatchet was because I'm a huge advocate of durable execution, but didn't enjoy using Temporal. So we try to make the development experience as good as possible.
On the underlying durable execution layer, it's the exact same core feature set.
skybrian 2 days ago [-]
Yep, though I haven’t used them, I’m vaguely aware that such things exist. I think they have a long way to go to become mainstream, though? Typical Go code isn’t written to be replayable like that.
abelanger 2 days ago [-]
I think there's a gap between people familiar with durable execution and those who use it in practice; it comes with a lot of overhead.
Adding a durable boundary (via a task queue) in between steps is typically the first step, because you at least get persistence and retries, and for a lot of apps that's enough. It's usually where we recommend people start with Hatchet, since it's just a matter of adding a simple wrapper or declaration on top of the existing code.
Durable execution is often the third evolution of your system (after the first pass with no durability, then adding a durable boundary).
sorentwo 2 days ago [-]
That's the issue with goroutines, threads, or any long running chain of processes. The tasks must be broken up into atomic chunks, and the state has to be serialized in some way. That allows failures to be retried, errors to be examined, results to be referenced later, and the whole thing to be distributed between multiple nodes.
It must in my view at least, as that's how Oban (https://github.com/oban-bg/oban) in Elixir models this kind of problem. Full disclosure, I'm an author and maintainer of the project.
I'm actually working on an agent library in Golang, and this is exactly the thought process I've arrived at. If we have comprehensive logging, we can actually reconstruct the agent's state at any position, allowing for replays etc. You just need the timestamp (endpoint) and the parent run, and you can build children/branched runs after that.
Through the use of both a map that holds a context tree and a database, we can purge old sessions and then reconstruct them from the database when needed (for instance, an async agent session with user input required).
We also don't have to hold individual objects for the agents/workflows/tools; we just make them stateless in a map and can reference the pointers through an ID as needed. Then we have a stateful object that holds the previous actions/steps/"context".
To make sure the agents/workflows are consistent, we can hash the output agent/workflow (as these are serializable in my system).
I have only implemented basic agents/tools though, and the logging/reconstruction/cancellation logic has not actually been done yet.
jpk 1 days ago [-]
Just a drive-by thought, but: What you're describing sounds a lot like Temporal.io. I guess the difference is the "workflow" of an agent might take different paths depending on what it was asked to accomplish and the approach it ends up taking to get there, and that's what you're interested in persisting, replaying, etc. Whereas a Temporal workflow is typically a more rigid thing, akin to writing a state machine that models a business process -- but all the challenges around persistence, replay, etc, sound similar.
Edit: Heh, I noticed after writing this that some sibling comments also mention Temporal.
Karrot_Kream 2 days ago [-]
Temporal is pretty decent at checkpointing long-running processes and is language agnostic.
trevinhofmann 1 days ago [-]
I've been considering good ways to use a task queue for this, and might just settle for a rudimentary one in a Postgres table.
The upside is that agent subtasks can be load balanced among servers, tasks won't be dropped if the process is killed, and better observability comes along with it.
The downside is definitely complexity. I'm having a hard time planning out an architecture that doesn't significantly increase the complexity of my agent code.
ashishb 2 days ago [-]
> For long-running, expensive processes that do a lot of waiting, a downside is that if you kill the goroutine, you lose all your work.
This is true regardless of the language.
I always do a reasonable amount of work (milliseconds up to a few seconds) in a goroutine each time. Anything more and your web service is not as stateless as it should be.
odyssey7 2 days ago [-]
AI engineers will literally invent a new universe before they touch JavaScript.
The death knell for variety in AI languages was when Google rug-pulled Swift for TensorFlow.
dpe82 2 days ago [-]
Avoiding JavaScript like the plague that it is, is not unique to AI engineers.
-Someone who has written a ton of JS over the past... almost 30 years now.
dpkirchner 1 days ago [-]
Choosing Python over JavaScript is one of the more perplexing decisions I've seen.
jpk 1 days ago [-]
It's not so perplexing when you understand that Python has long had the best ecosystem of libraries for data science and ML, from which the current wave of AI stuff was born. There are plenty of reasons to dunk on Python, but the reality is lots of people were getting real work done with it in the run up to where we are today.
odyssey7 21 hours ago [-]
There are choices at multiple levels.
Yes, today’s ML engineer has practically no choice but to use Python, in a variety of settings, if they want to be able to work with others, access the labor market without it being an uphill battle, and most especially if they want to study AI / ML at a university.
But there were also the choices to initially build out that ecosystem in Python and to always teach AI / ML in Python. They made sense logistically, since universities largely only teach Python, so it was a lowest-common-denominator language that allowed the universities to give AI / ML research opportunities to everyone, with absolutely no gatekeeping and with a steadfast spirit of friendly inclusion (sorry, couldn’t resist the sarcastic tangent). I can’t blame them for working with what they had.
But now that the techniques have grown up and graduated to form multibillion-dollar companies, I’m hopeful that industry will take up the mantle to develop an ecosystem that’s better suited for production and for modern software engineering.
vovavili 17 hours ago [-]
When it comes to modern Python, the only thing that can make it not production-ready is it being slow. Given that people in machine learning are using Python as a glue language for AI/ML libraries, this negligibly impacts their workflow.
lou1306 20 hours ago [-]
How good is JS interop with C/C++/BLAS? That's the basic stepping stone, I think. If you cannot make something in JavaScript that can compete with numpy there's little chance that things will change anytime soon.
odyssey7 15 hours ago [-]
I don’t know the details specifically, since I haven’t been able to justify investing my efforts in the non-flagship ecosystem within the TensorFlow project after it added its Swift version to the Google Graveyard, but TensorFlow.js is doing something in this direction for the Node.js version. This info is at: https://www.tensorflow.org/js/guide/nodejs
“Like the CPU package, the module is accelerated by the TensorFlow C binary. But the GPU package runs tensor operations on the GPU with CUDA.”
They note that these operations are synchronous, so using them will sacrifice some of JavaScript’s effectiveness at asynchronous event processing. This is not different from Python when you are training or serving a model. JavaScript’s strengths would shine brighter when coordinating agents / building systems that coordinate models.
dpe82 10 hours ago [-]
Oh yeah. Personally I also try to avoid Python but as the rest of this thread covers it's pretty deeply rooted in ML/AI so I think we're stuck with it - at least for a while.
rednafi 1 days ago [-]
This is the way.
JS is a terrible language to begin with, and bringing it to the backend was a mistake. TS doesn’t change the fact that the underlying language is still a pile of crap.
So, like many, I’ll write anything—Go, Rust, Python, Ruby, Elixir, F#—before touching JS or TS with a ten-foot pole.
mkfs 14 hours ago [-]
> Python, Ruby
It's 2025, Node.js has been around since 2009, yet these languages still use C-based interpreters by default, and their non-standard JIT alternatives are still much worse than V8.
trevinhofmann 1 days ago [-]
This doesn't contribute much to the discussion.
Use whatever language works well for you and the task at hand, but many enjoy fullstack JS/TS.
koakuma-chan 1 days ago [-]
You think Python is a better language than TS?
rednafi 1 days ago [-]
Anything at this point.
koakuma-chan 10 hours ago [-]
Python is terrible in the age of LLMs because type checking doesn't work properly.
rednafi 3 hours ago [-]
Python is basically the only language that’s used to train the models.
Sure, the libs are mostly written in C/C++, but all of them have first-class support for Python and oftentimes Python only. Serving the model is a different story and you can use whatever language to do so.
As someone who has worked in the DS realm for an extended period of time, I can tell you Python has practically zero competition when it comes to data wrangling and training models. There are plenty of contenders when it comes to serving the models or building “agents.”
As for type checking, yeah, it sucks big time. TS is a much better type system than the bolted-on hints in Python. But it’s still JS at the end of the day. All the power of V8, a zillion other runtimes, and TS gets marred by a terribly designed language.
kweingar 2 days ago [-]
Why is JS particularly good for agents?
tinrab 1 days ago [-]
I'd say TypeScript is currently the best choice for agents. For one, MCP tooling is really solid, the language itself is easy, fast to develop in, and not esoteric.
odyssey7 1 days ago [-]
The same reason it’s good for web servers: it excels at event-driven applications.
arthurcolle 1 days ago [-]
This is a specious argument. It's event-driven because it has callbacks?
odyssey7 22 hours ago [-]
“Specious” meaning what exactly?
EGreg 2 days ago [-]
Because it integrates great with browsers and people know the language already for node.js and the packages in npm can work for both?
wild_egg 2 days ago [-]
A uniform language and ecosystem has been the siren song of JS for over a decade and I've yet to see it work out in any meaningful way.
Use whatever you like.
EGreg 2 days ago [-]
I mean, what else do you use to run things in the browser?
Pouchdb. Hypercore (pear). It’s nice to be able to spin up JS versions of things and have them “just work” in the most widely deployed platform in the world.
TensorflowJS was awesome for years, with things like blazeface, readyplayer me avatars and hallway tile and other models working in realtime at the edge. Before chatgpt was even conceived. What’s your solution, transpile Go into wasm?
Agents can work in people’s browsers as well as node.js around the world. Being inside a browser gives a great sandbox, and it’s private on the person’s own machine too.
> what else do you use to run things in the browser?
I do my best to run as little in the browser as possible. Everything is an order of magnitude simpler and faster to build if you do the bulk of things on a server in a language of your choice and render to the browser as necessary.
kweingar 1 days ago [-]
I was wondering if there was something particular about AI, but that's just the standard reason people give to use JS for anything.
guywithahat 2 days ago [-]
I wish we had better concurrency models in the ML world. I tried doing some ML in Go a few months back and it's basically impossible; there's just no library support and doing anything requires a gRPC call or a wrapper. Python has limitations and C++ has a tendency to make everything too verbose.
Some things are just more natural in Python, it being a dynamic language. E.g. a decorator to quickly convert methods into tool calls, iterating over tool functions to create a list of tools, packages to quickly convert them into JSON schema, etc.
Consuming many incoming triggers - e.g. from user input, as well as incoming emails from Gmail or messages from Slack, which would trigger a new agent run - was a lot more natural in Go with channels and a for-select loop than in Python, where I had to create many queues and threads.
flanked-evergl 1 days ago [-]
Go's quite horrendous and limited type system makes it a poor fit for everything. The worst thing about Go is, in fact, the language. Everything except the language redeems it.
Luker88 21 hours ago [-]
I have been programming in Go for several years now and I agree, though I am not that sure even the rest of the ecosystem redeems it that much.
On the other hand, the programming languages used by LLM people seem to be python and javascript mainly.
So while I argue that they all should really move on to modern languages, I think Go is still better than the I-can't-even-install-this mess of Python and JavaScript imports, without even a Dockerfile, that seems to be so prevalent in LLM projects.
giik 1 days ago [-]
Can you elaborate a bit on how does "Go's quite horrendous and limited type system" get in the way of crafting agents?
Honest question, I am genuinely interested in what cannot be done easily or at all due to limitations of the Go type system.
Luker88 20 hours ago [-]
If you want to know only about the type system: nowadays it's mostly the lack of basic enums, and a clear divide between basic features of the language and of the libraries (plus the late arrival of generics), leading to things like `len(..)` vs `.Len()`. Those actually end up playing a bigger role than it seems imho, but even just the rest is death by a thousand cuts.
You can find many articles on the internet about it, but in my experience I would summarize it in:
It looks like it's made to have a simple compiler, not to simplify the programmer's life.
Initially its simplicity is wonderful. Then you start to notice how verbose things are. Channels are another looks-nice-but-maybe-don't feature. nil vs nil-interface. The lack of proper enums hurts so much I can't describe it. I personally hate automatic type conversions, and there are so many inconsistencies in the standard and most-used libraries that you really start to wonder why some things were even done: validators that validate nothing, half-done tagging systems for structs, tons of similar-but-not-quite interfaces and methods.
It's like the language has learning wheels that you can't shake off or work around. You end up wanting to leave for a better one.
People had to beg for years for basic generics and small features. If Google is not interested in it, you'd better not be interested in it, and it shows after a while.
Companies started to use it as an alternative to C and C++, while in reality it's an alternative to Python. Just like in Python, a lot of the work and warnings are tied into the linter as a clear workaround. Our linter config has something like 70+ linter classes enabled, and we are a very small team.
C can be described as a relatively simple language (with caveats), C++ has grown to a blob that does and has everything, and while they have lots of footguns I did not find the same level of frustration as with go. You always end up fighting a lot of corner cases everywhere.
Wanted to say even more, but I think I ranted enough.
9rx 19 hours ago [-]
> Lack of proper enums is hurting so much I can't describe it.
Do you mean sum types? That is not a case of them not being "proper", though. They simply do not exist as a feature at all.
Go's enums function pretty much like enums in every single other language under the sun. If anything, Go enums are more advanced than most languages, allowing things like bit shifts. But at the heart of it all, it's all just the same. Here are enum implementations in both Go and Rust:
While Go leans on the enum value produced by `range` to act as the language's enumerator and Rust performs explicit incrementing to produce the enumerator, the outcome is no different — effectively nothing more than [n=0, n++]. Which stands to reason, as that's literally, as echoed by the dictionary, what an enum is.
unscaled 17 hours ago [-]
Go doesn't even have classic type-safe integer-value enums like in C++.
Yes, you can emulate this style of enums by using iota to start a self-incrementing list of integer constants. But that's not what any language (except for C) has ever meant by "enum".
Enums are generally assumed to be type-safe and namespaced. But in Go, they are neither:
    package main

    import "fmt"

    type Color int

    const (
        Red Color = iota
        Green
        Blue
    )

    func show(color Color) {
        fmt.Printf("State: %v", color)
    }

    func main() {
        show(Red)
        show(6) // compiles fine: any untyped integer constant converts to Color
    }
There is no namespacing, no way to — well — enumerate all the members of the enum, no way to convert the enum value to or from a string (without code-generation tools like stringer), and the worst "feature" of all is that enums are just integers that can freely receive incorrect values.
If you want to admire a cool hack that you can show off to your friends, then yeah, iota is a pretty neat trick. But as a language feature it's just an ugly and awkward footgun. Being able to auto-increment powers of two is a very small consolation prize for all of that (and something you can easily achieve in Rust anyway with any[1] number[2] of crates[3]).
> Go doesn't even classic type-safe integer-value enums like in C++ or enums.
Sure, but now you're getting into the topic of types. Enums produce values. Besides, Go isn't really even intended to be a statically-typed language in the first place. It was explicitly stated when it was released that they wanted it to be like a dynamically-typed language, but with statically-typed performance.
If you want to have an honest conversation, what other dynamically-typed languages support type-safe "enums"?
> But that's not what any language (except for C) has ever meant by "enum".
Except all the others. Why would an enum used when looping over an array have a completely different definition? It wouldn't, of course. Enums are called what they are in a language because they actually use enums in the implementation, as highlighted in both the Go and Rust codebases above.
Many languages couple enums with sum types to greater effect, but certainly not all. C is one, but even Typescript, arguably the most type-intensive language in common use, also went with "raw" enums like Go.
Luker88 17 hours ago [-]
It's not about 'range', and like you said enum and sum types are tied concepts in other languages, and yes I was talking about sum types.
Even without sum types, there is a common pattern of defining a new type and const-defining the possible values that is a clear workaround on the lack of an 'enum' keyword.
Maybe because the compiler can't be sure that those const values are all the possible values of the type, we can't have things like enforcing exhaustive switches on this "enum", and that is left to the linter at best.
Default-zero initialization is always valid too, which can leave you with an "enum" value that is not present in the const definitions (not everything starts on iota, iota does not mean 0).
It's a hack, it became a pattern. It still is not a proper (or even basic) enum even without sum types.
9rx 17 hours ago [-]
> It's not about 'range'
It is to the extent that it helps explain what an enum is, and why we call the language feature what we do. Python makes this even more apparent as you explicitly have to call out that you want the enum instead of it always being there like in Go:
for i, v in enumerate(array):
# ...
In case I'm not being clear, an array enumerator like in the above code is not the same as a language enumerator, but an array enumerator (or something similar in concept) is how language enumerators are implemented. That is why language enumerators got the name they did.
> It still is not a proper (or even basic) enum even without sum types.
It most certainly is "proper". In fact, you could argue that most other languages are the ones that are lacking. Go's enums support things like bit shifts, which is unusual in other languages. Perhaps it is those other languages that aren't "proper"?
But, to be sure, it's not sum types. That is certain. If you want sum types you are going to have to look elsewhere. Go made it quite clear from the beginning that it wanted to be a "dynamically-typed language with statically-typed performance", accepting minimal static type capability in order to support the performance need.
There is definitely a place for languages with more advanced type systems, but there are already plenty of them! Many are considerably older than Go; Haskell has decades on it. Go was decidedly created to fill the niche of "Python, but faster", which wasn't well served at the time. Creating another Haskell would have been silly and pointless, just another addition to the long list of obscure languages serving no purpose.
ojosilva 16 hours ago [-]
> Companies started to use it as an alternative to C and C++, while in reality it's an alternative to python. Just like in python a lot of the work and warnings are tied into the linter as a clear workaround. Our linter config has something like 70+ linters classes enabled, and we are a very small team.
I thought the main "let's migrate our codebase to Go" crowd had always been the Java folks, especially the enterprise ones. Any performant C/C++ code will take a hit, albeit small, from migrating to a GC-based runtime like Go, so I'd think that could be a put-off for any critical realtime stuff, where Rust can be a much better target. And, true for both C++ and Java codebases, they might also have to undergo a major redux at the type/class level.
But yes, the Googlers behind Go were frustrated by C++ compile times, tooling warts, the 0x standard proposal and concurrency control issues - and that was primal for them, as they wanted to write network-server software that was tidy and fast [1]. Java was a secondary (but important) huge beast they wanted to tackle internally, IIRC. Java was then the primary language Googlers were using on the server... Today apparently most of their cloud stuff is written in Go.
Well, there's a difference between "our program is written in C++ because we correctly chose it for its performance" and "our program happens to be written in C++ because some programmer 10 years ago really liked C++".
There's a lot of software out there that either was written before good modern options existed, or uses very outdated patterns, or its language wasn't chosen with much thought.
zveyaeyv3sfye 1 days ago [-]
It's just an uninformed hivemind comment written by someone lacking original thought.
If you are interested in the merits of golang, you should listen to someone who uses it.
flanked-evergl 1 days ago [-]
I used it for years.
williamdclt 23 hours ago [-]
I think the point was that "Go's quite horrendous and limited type system" gets in the way of everything (programming in general), nothing specific to crafting agents.
There's a lot of discussions on the internet about the bad design decisions of Golang (for example around channels, enums, error handling, redeclarations, interfaces, zero values, nilability... at least generics aren't so much a subject anymore)
9rx 15 hours ago [-]
To be fair, it is unlikely that Go would have a static type system at all if they had figured out how to achieve the performance expectations without. It was made abundantly clear that it is intended to be like a dynamically-typed language, but faster. Thinking of it as being a statically-typed language is a bit flawed, and shows a gross misunderstanding of what the language was created for.
While you could try to argue that dynamically-typed languages in general are a poor fit for everything, the reality is that people there are typically using Python instead – and the best alternative suggestions beyond Go are Erlang and Elixir, which are also dynamically typed, so that idea doesn't work. Dynamic typing is what clearly fits the problem domain.
bbkane 1 days ago [-]
I agree: multiple return values don't compose; errors are better than exceptions, but still super verbose; channels have a lot of footguns; enums are just sad.
But despite all of that the language has some really good qualities too: interfaces work far better than it feels like they should; the packaging fits together really well (I'm learning Rust right now and the file structure is far more complicated); and people are able to write a lot of linters/codegen tools BECAUSE the language is so simple.
All in all I worry the least about the long term maintenance cost of my Go code, especially compared to my Python or JS code.
9rx 24 hours ago [-]
> enums are just sad.
There isn't much more you can do with them. Literally all an enum can produce is a number.
In increasingly common use in a number of languages, enums are being coupled with discriminated unions, using the enum value as the discriminant. This is probably what you're really thinking of, noticing Go's lack of unions.
But where you might use a discriminated union if it were available, you would currently use an interface, where the type name, rather than an enum, is what differentiates the different types. If Go were to gain something like a discriminated union, it is likely it would want to extend upon that idea. Most especially given that generics already introduced syntax that would lend itself fairly well to it:
    type Foo interface {
        Bar | Baz
    }
Where enums are used to act as a discriminant in other languages, that is largely just an implementation detail that nobody really cares about. In fact, since you mentioned Rust, there are only 8,000 results on Github for `std::mem::discriminant` in Rust code, which is basically nothing. That is quite indicative that wanting to use an enum (in Rust, at least) is a rare edge case. Funny, given how concerned Rust users seem to be about enums. Talk is cheap, I suppose.
kbolino 19 hours ago [-]
Java has true enums that are neither fancy integers nor discriminated unions. The following is not a list of integers:
    public enum Day {
        SUNDAY, MONDAY, TUESDAY, WEDNESDAY,
        THURSDAY, FRIDAY, SATURDAY
    }
To use this enum, you typically declare a variable of type Day, which is a subclass of Enum, itself a subclass of Object, which cannot be cast to or from int. If a variable is typed as Day, then it can only take one of these variants (or null). Even though the Day class does have an ordinal() method, and you can look up the variants by ordinal, you cannot represent Day(7) or Day(-1) in any way, shape, or form. This sealed set of variants is guaranteed by the language and runtime (*). Each variant, like SUNDAY, is an instance of class Day, and not a mere integer. You can attach additional methods to the Day class, and those methods do not need to anticipate any other variants than the ones you define. Indeed, enums are sometimes used with a single variant, typically called INSTANCE, to make true singletons.
* = There is a caveat here, which is that the sealed set of variants can differ between compile-time (what's in a .java file) and runtime (what's in a .class file) but this only happens when you mismatch your dependency versions. Rather importantly, the resolution of enum variants by the classloader is based on their name and not their ordinal, so even if the runtime class differs from the compile-time source, Day.MONDAY will never be turned into a differently named variant.
9rx 19 hours ago [-]
> The following is not a list of integers
Then I am not sure how you think it is an enum? What defines an enum, literally by dictionary definition, is numbering.
It is hilarious to me that when enum is used in the context of looping over an array, everyone understands that it represents the index of the element. But when it comes to an enum in a language, all of a sudden some start to think it is magically something else? But the whole reason it is called an enum is because it is produced by the order index of an AST/other intermediate representation node. The very same thing!
While I haven't looked closely at how Java implements the feature of which you speak, I'd be surprised if it isn't more or less the same as how Rust does it under the hood. As in using a union with an enumerator producing the discriminant. In this case it would be a tag-only union, but that distinction is of little consequence for the purposes of this discussion. That there is an `ordinal` method pretty much confirms that suspicion (and defies your claim).
> you cannot represent Day(7) or Day(-1) in any way, shape, or form.
While that is true, that's a feature of the type system. This is a half-assed attempt at sum types. Enums, on the other hand, are values. An enum is conceptually the same as you manually typing 1, 2, 3, ... as constants, except the compiler generates the numbers for you automatically, which is what is happening in your example. The enum is returned by `ordinal`, like you said. Same as calling std::mem::discriminant in Rust like we already discussed in a sibling thread.
kbolino 18 hours ago [-]
The existence of the ordinal method reveals nothing except that the ordinal exists. It can be (and is) simply a field on each Day object, not an index into anything (though the Day objects are probably stored in an array, this is not required by any property of the system). Day.SUNDAY is ultimately a pointer, not an int. It is also a symbolically resolved pointer, so it will never become Day.MONDAY even if I reorder the variants so that their ordinals are swapped. The ordinal is not a discriminant.
You seem to be trivializing the type system. This property is not imagined solely by the compiler, it is carried through the language and runtime and cannot be violated (outside of bugs or unsafe code). Go has nothing like this.
If you choose to call this "not an enum", that is certainly your idiosyncratic prerogative, but that doesn't make for very interesting discussion. Even though I agree that discriminated unions aren't enums and am somewhat annoyed by Rust's overloading of the term, this is not that.
9rx 18 hours ago [-]
> The existence of the ordinal method reveals nothing except that the ordinal exists.
It strongly suggests that the implementation is a discriminated union, just like Rust's. Again, it is tag-only in this case, where Rust also allows attaching a payload, but that's still a type of discriminated union. That it is backed by a set of integers (contrary to the claim made earlier), combined with your explanation of how the compiler type-checks against the union state, reveals that it couldn't be anything other than a discriminated union, effectively identical to what we find in Rust and many other languages these days.
> It can be (and is) simply a field on each Day object, not an index into anything
So...? An enum is not particular about exactly where the number comes from; it simply needs to number something. Indices are convenient, though, and I am not sure why you would use anything else. That doesn't necessarily mean the index will start where you think it should, of course.
For example,
    enum Foo { A, B, C }
    enum Bar { X, Y, Z }
In some languages, the indices might "reset" for each enum [A=0, B=1, C=2, X=0, Y=1, Z=2], while in other languages it might "count from the top" [A=0, B=1, C=2, X=3, Y=4, Z=5]. But, meaningless differences aside, where else is the number going to come from? Using a random number generator would be silly.
But, humour us, how does Java produce its enums and why doesn't it use indices for that? Moreover, why did they choose to use the word `ordinal` for the method name when that literally expresses that it is the positional index?
kbolino 18 hours ago [-]
Setting aside the full enum API, as well as certain optimizations, this is a rough equivalent of the enum I gave:
    public class Day extends Enum<Day> {
        private int _ordinal;
        private Day(int ordinal) { this._ordinal = ordinal; }
        public int ordinal() { return this._ordinal; }
        public static final Day SUNDAY = new Day(0);
        // ...
        public static final Day SATURDAY = new Day(6);
    }
with the added constraint that the Day constructor cannot be invoked by reflection, and the static instances shown herein can be used in a switch statement (which may reduce them to their ordinals to simplify the jump table). Each instance is ultimately a pointer, so yes it could be pulled from a sort of RNG (the allocator). As I said they are probably in an array, so it's likely that the addresses of each variant start from some semi-random base but then increase by a fixed amount (the size of a Day object). A variable of type Day stores the pointer, not the ordinal.
Now, it really seems to be in the weeds of pedantry when you start talking about discriminated unions that have only discriminants and no payload. Taking from your examples, the key point is that a Foo is not a Bar and is also not an int. Regardless of whether the variants are distinct or overlapping in their ordinals, they are not interchangeable with each other or with machine-sized integers.
9rx 18 hours ago [-]
> this is a rough equivalent of the enum I gave
Yes, this echoes what I stated earlier: "An enum is conceptually the same as you manually typing 1, 2, 3, ... as constants, except the compiler generates the numbers for you automatically" Nice to see that your understanding is growing.
> Taking from your examples, the key point is that a Foo is not a Bar.
I'm not sure that's a useful point. Nobody thinks
    class Foo {}
    class Bar {}
...are treated as being the same in Java, or, well any language that allows defining types of that nature. That is even the case in Go!
    type Foo int
    type Bar int

    const f Foo = iota
    const b Bar = f // compiler error on mismatched types
But what is significant to the discussion about enums is the value that drives the inner union of the class. As in, the numbers that rest beneath SUNDAY, MONDAY, TUESDAY, etc. That's the enum portion.
kbolino 18 hours ago [-]
I don't understand anything more now than I did at the start. It is clear we are talking past each other.
The values of Day are {SUNDAY, ..., SATURDAY} not {0, ..., 6}. We can, of course, establish a 1:1 mapping between those two sets, and the API provides a convenient forward mapping through the ordinal method and a somewhat less convenient reverse mapping through the values static method. However, at runtime, instances of Day are pointers not numbers, and ints outside the range [0, 6] will never be returned by the ordinal method and will cause IndexOutOfBoundsException if used like Day.values()[ordinal].
Tying back to purpose of this thread, Go cannot deliver the same guarantee. Even if we define
    type Day int

    const (
        Sunday Day = iota
        // ...
        Saturday
    )
then we can always construct Day(-1) or Day(7) and we must consider them in a switch statement. It is also trivial to cast to another "enum" type in Go, even if the variant doesn't exist on the other side. This sealed, nonconvertible nature of Java enums makes them "true" enums, which you can call tag-only discriminated unions or whatever if you want, but no such thing exists in Go. In fact, it is not even possible to directly adapt the Java approach, since sealed types of any kind, including structs, are impossible thanks to new(T) being allowed for all types T.
9rx 18 hours ago [-]
> This sealed, nonconvertible nature of Java enums makes them "true" enums, which you can call tag-only discriminated unions or whatever if you want, and no such thing exists in Go.
It is no secret that Go has a limited type system. In fact, upon release it was explicitly stated that the goal was for it to be a "dynamically-typed language with statically-typed performance", meaning that what limited type system it does have is there only to support the performance goals. You'd have to be completely out to lunch while also living under a rock to think that Go has "advanced" types.
But, as before, enums are values. It is not clear why you want to keep going back to talking about type systems. That is an entirely different subject. It may be an interesting one, but it's off-topic as it pertains to this discussion specifically about enums, and especially not useful in the context of Go, which isn't really intended to be a statically-typed language in the first place.
LtdJorge 22 hours ago [-]
    #[repr(u8)]
    enum Discriminant {
        Disc0 = 0,
        Disc1 = 1,
        …
    }
9rx 20 hours ago [-]
Funny enough, manually defining the discriminant disables the enumerator:
    enum Discriminant1 {
        Disc0,
        Disc1,
    }

    #[repr(u8)]
    enum Discriminant2 {
        Disc0 = 10,
        Disc1 = 20,
    }

    fn main() {
        let d1 = Discriminant1::Disc1;
        let d2 = Discriminant2::Disc1;
        println!("{:?}", std::mem::discriminant(&d1)); // Value by enumerator.
        println!("{:?}", std::mem::discriminant(&d2)); // Value by constant.
    }
Which makes the use of the enum keyword particularly bizarre given that there is no longer even an enumerator involved, but I suppose bizarre inconsistencies are par for the course in Rust.
And because it has been used like that in C for decades, the dictionary definition takes a backseat to the now de-facto C-based definition (at least for popular systems languages, which Rust is trying to share as much syntax with).
9rx 13 hours ago [-]
> Rust takes it straight from C
Meaning the keyword? Sure, C has the same inconsistency if you disable the enumerator with manual constant values. C is not exactly the paragon of thoughtful design. But whataboutism is a dumb path to go down.
> the dictionary definition takes a backseat to the now de-facto C-based definition
That's clearly not the case, though, as the functionality offered by the Rust enum keyword is very different. It puts absolutely no effort into being anything like C. Instead, it uses enum as the keyword for defining sum types. The C enum keyword, on the other hand, does nothing but define constants, and is functionally identical to what Go has. There is an enum involved in both cases, as demonstrated earlier, so the terminology isn't strictly wrong (in the usual case) but the reason for it existing shares little commonality.
But maybe you've moved onto the concept of enums rather than syntax and I didn't notice? You are right that the dictionary definition is in line with the intent of the C keyword, which speaks to the implementation, and is how C, Rust, Go, and every other language out there use the terminology. In another comment I even linked to the implementation in both Go and Rust and you can see that the implementation is conceptually the same in both cases: https://news.ycombinator.com/item?id=44236666
rcarmo 22 hours ago [-]
I wouldn't mind a well-maintained LISP/Scheme dialect that compiled to Go.
Zambyte 19 hours ago [-]
Definitely not well maintained, but it's interesting to see that something like that came out of SteelSeries:
I don't think I agree that Go is good for LLM work.
But outside of that: ML in Go is basically impossible. Trying to integrate with the outside ecosystem from Go is really difficult, and my experience has been that Claude Code is far less effective with Go than it is with Python, or even Swift.
I ditched a project I was writing in Go and replaced it with Swift (this was mostly prompt-based anyways). It was remarkable how much better the first pass of the code generation was.
hoppp 2 days ago [-]
What if... hear me out... you learn to write code instead of generating it? There is a drastic improvement in code quality if you can actually write it.
jillesvangurp 1 days ago [-]
Go isn't horrible for this stuff. But I don't think it's notably better than a lot of other languages either.
Frankly, anything that has a compiler and supports doing asynchronous stuff decently probably does the job. Which of course describes a wide range of languages. And since agents inherently involve a lot (some would say mostly) prompt engineering, it helps if the language is good at things like multi line strings, templated strings, and just generally manipulating strings.
As for the async stuff, it's nice if a language can do async things. But is that enough? Agentic systems essentially reach out to other systems over the network. Some of the tasks may be long-lived. Minutes, hours, or even days. A lot can happen in such a long time. IMHO the model of some system keeping all that state in a long-running process is probably not ideal. We might want something more robust and less dependent on some stateful process running somewhere for days on end.
There is an argument to be made for externalizing related state from the language and maybe using some middleware optimized for this sort of thing. I've seen a few things that go in that direction but not a lot yet. It seems that people are still busy reinventing wheels and not fully realizing yet that a lot of those wheels don't need reinventing. There's a lot of middleware out there that is really great at async job scheduling, processing, fan out, and all the other stuff that people eventually will figure out is needed here.
carsoon 2 days ago [-]
I wrote the start to an agent library in Go. Its quite rough as most of it was implemented through using AI but I had a lot of ideas through planning/building it.
1. If you make your agents/workflows serializable you can run/load them from a config file or add/remove them from a decoupled frontend. You can also hash them to make versioning easy to track/immutable.
2. If you decouple the stateful object from the agent/workflow object, you can store it through sufficient logging; then you can rebuild any flow at any state, and have branching by allowing traces to build on one another. You can also restart/rerun a flow starting at any location.
3. You can allow for serializable tools by having a standard HttpRequestTool, then set up Cloudflare Workers or any external endpoints for the actual tool-call logic, removing primary-server load and making it possible to add/remove tools without rebuilding/restarting.
Given this system in golang you can have a single server which supports tens of thousands of concurrent agent workflows.
The biggest problem is there aren't that many people working on it. So even if you can make agents 100x more efficient by running them in Go, it doesn't really matter if cost isn't the biggest factor for the final implementations.
The actual compute/server/running costs for big AI agent implementation contracts are <1%, so making them 100x more efficient doesn't really matter.
When building a SaaS with a Go backend, it's nice to be able to have the option of the agents and workflows being in the same process. And being confident in the ability of that to scale well.
While it's true that Go lacks good ML libraries, for some this isn't too consequential if your app is primarily using Anthropic or OpenAI and a database that offers semantic or hybrid search for RAG. The ML is done elsewhere. Plus it could be that you can leverage MCP servers and at that point you're language agnostic.
Regarding the concurrency model approach with Go and agents, I initially baked a message based approach (a la the Actor model, with one goroutine per agent) into Dive Agents, but eventually found that this would be better implemented as another layer. So currently in Dive it's the user's choice on how to implement concurrency and whether to use messaging. But I anticipate building that back in as an optional layer.
bewestphal 1 days ago [-]
If you're not using Python's or TypeScript's ecosystems, then you spend a lot of time as a framework dev. This has a high opportunity cost when you can easily slap agents together and have products quickly nowadays.
rgavuliak 14 hours ago [-]
This reminds me of a talk called "Modern Data Science in Go". That line of thinking went nowhere, and many arguments in that talk were misleading.
rpep 2 days ago [-]
In practice the library ecosystem is just way behind Python's. Maybe it makes sense once you're trying to optimise, after you've worked out how to do stuff, but even the LangChain Go port is wayyyyy behind.
kamikaz1k 2 days ago [-]
> High concurrency
> Share memory by communicating
> Centralized cancellation mechanism with context.Context
> Expansive standard library
> Profiling
> Bonus: LLMs are good at writing Go code
I think profiling is probably the lowest value good here, but would be willing to hear out stories of AI middleware applications that found value in that.
Cancelling tasks is probably the highest value good here, but I think the contending runtimes (TS/Python) all prefer using 3P libraries to handle this kind of stuff, so probably not the biggest deal.
Being able to write good Go code is pretty cool though; I don't write enough to make a judgement there.
eikenberry 2 days ago [-]
> Bonus: LLMs are good at writing Go code
Good at writing bad code. But most of the code in the wild is written by mid-level devs, without guidance and on short timelines, i.e. bad code. But this is a problem with all languages, not just Go.
hoppp 2 days ago [-]
Go is a good fit for many use-cases.
prats226 2 days ago [-]
So far bigger bottleneck I have found in writing agents is in scaling integrations and not the for loop for agent. Lack of libraries for go is a really big challenge.
TeeWEE 17 hours ago [-]
The main thing Go lacks is automatic OpenAPI generation from a Go func. You at least need reflection. It can be done, but not as easily as in Python.
jasonthorsness 2 days ago [-]
Go is great for command-line tools because of library support and fast-starting single-binaries. While most of the benefits in the article are also shared with JavaScript, I wonder if the CLI advantage will help and whether command-line agents will become a thing ("grepllm"?)
The language of agents doesn't matter much in the long run as it's just a thin shell of tool definitions and API calls to the backing LLM.
vergessenmir 2 days ago [-]
Go is great for concurrency. Not quite there for agent support. The problem isn't performance or message passing, it's the agent middleware, i.e. logging, tracing, retries, configuration.
You need a DSL either supported in the language or through configuration. These are features you get for free in Python and, secondarily, JavaScript. You have to write most of this yourself in Go.
the_arun 1 days ago [-]
I have the same bunch of reasons for Java. The concept of notebooks won the hearts of developers, I guess.
npalli 2 days ago [-]
Agents to do what? Take ML/AI: all the infra and tools are Python/C++, so what exactly is an agent in Go going to help you with? Many such domains: gaming, HFT, HPC, scientific computing, systems, UX, enterprise, etc. It seems it really only helps in Go's sweet spot: CLIs and networking services.
tptacek 2 days ago [-]
Agents mostly don't run ML/AI code; they're a structured loop around LLM calls and exist mostly to give an LLM access to local tools in some application domain; think "reading my email for me" rather than "driving ML systems".
achileas 2 days ago [-]
That doesn't negate what OP was saying at all, the better support in Python isn't for "running ML/AI code" but in things like agent frameworks, observability tools, SDKs, etc. None of which directly run AI code but are still helpful/necessary and for the most part better represented (and supported) in the Python world, although that seems like it's slowly changing.
tptacek 2 days ago [-]
There are agent frameworks in most languages at this point, so the question just comes down to "can you invoke tools for the problem you want to solve in that language". Yes, Python is really great at that. So are Go and JavaScript.
I think I'd condense this out to "this is not a really important deciding factor in what language you choose for your agent". If you know you need something you can only get in Python, you'll write the agent in Python.
paxys 1 days ago [-]
Every single feature of an "agent" they have described is just...generic software development. Writing loops. if/else statements to branch execution paths. Waiting on input. Spawning child processes and communicating with them. Running CPU-bound operations (like parsing).
So every discussion about the "best" programming language is really you telling the world about your favorite language.
Use Go. Use Python. Use JavaScript. Use whatever the hell else you want. They are all good enough for the job. If you are held back it won't be because of the language itself.
abelanger 1 days ago [-]
For an agent that executes locally, or an agent that doesn't execute very often, I'd agree it's arbitrary.
But programming languages make tradeoffs on those very paths (particularly spawning child processes and communicating with them, how underlying memory is accessed and modified, garbage collection).
Agents often involve a specific architecture that's useful for a language with powerful concurrency features. These features differentiate the language as you hit scale.
Not every language is equally suited to every task.
tolerance 2 days ago [-]
As a functionally-code-illiterate-vibe-coder, I can confirm that LLMs are good at writing Go code.
dpe82 2 days ago [-]
As a highly-literate developer with almost 30 years of experience, I can also confirm that LLMs are very good at writing Go.
crabmusket 1 days ago [-]
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt." – Rob Pike
This fits LLMs pretty well too it seems!
dpe82 13 hours ago [-]
That's my assumption too: it's a simple language with very good tooling that's used by reasonably serious projects with good code quality. As a result the LLMs have both high quality training data and the output is easier to get right.
Aperocky 1 days ago [-]
I tend to agree; however, be very careful about proliferating channels, since it's about as easy to write them as it is for Java devs to write Factories and Managers.
mountainriver 1 days ago [-]
Rust is a way better option, not sure why it isn't being mentioned.
The issue with Go is that as soon as you need to do actual machine learning, it falls down.
The issue with Python is that you often want concurrency in agents, although this may be solved with Python's new threading.
Why is Rust great? It interops very well with Python, so you can write any concurrent pieces in it and simply import them into Python, without needing to sacrifice any ML work.
I'll be honest Go is a bit of an odd fit in the world of AI, and if thats the future I'm not sure Go has a big part to play outside of some infra stuff.
rednafi 1 days ago [-]
Not every post about Go needs to mention Rust. Rust has its niche and so does Go. Both kind of suck at AI.
LLM researchers care about neither since Rust comes with its own headache: learning curve, slow compilation, weak stdlib, and Go’s FFI story is just sad. It’s still Python or GTFO.
That said, Go is great to whip up “agents” since it’s a nicer language to write networking and glue code, which is what agents are. Other than a few niche groups, I’ve seen a lot more agents written in Go than in Rust.
mountainriver 1 days ago [-]
There is wayyyyy more ML activity in Rust than Go. Rust is actually becoming quite viable in the ML space.
Agents that don’t do machine learning rarely ever work, that’s the sad truth of the ecosystem.
solomatov 1 days ago [-]
Is anyone aware of good LLM orchestration libraries for Go, like LangChain for Python and TypeScript?
Dive orchestrates multi agent workflows in Go. Take a look and let me know what you think.
awinter-py 2 days ago [-]
oh my god these things run on a gpu don't they? they have nothing to do with golang? to the extent they run on a cpu they're heavy; we're not like solving the c10k problem with agents
achileas 2 days ago [-]
Agents typically don't, and any LLM they're calling is likely hosted remotely.
danenania 2 days ago [-]
I built Plandex[1] (open source CLI coding agent focused on large projects and tasks) in Go and I’ve been very happy with that decision.
Beneath all the jargon, it’s good to remember that an “agent” is ultimately just a bunch of http requests and streams that need to be coordinated—some serially and some concurrently. And while that sounds pretty simple at a high level, there are many subtle details to pay attention to if you want to make this kind of system robust and scalable. Timeouts, retries, cancellation, error handling, thread pools, thread safety, and so on.
This stuff is Go’s bread and butter. It’s exactly what it was designed for. It’s not going to get you an MVP quite as fast as node or python, but as the codebase grows and edge cases accumulate, the advantages of Go become more and more noticeable.
...could we just get Go's GREAT concurrency model and decent standard lib, but in a language that is less horrible than Go (with a decent type system, enums, expression-based grammar, pattern matching, etc.)?
pretty please :P
we all yearn for a good static language, and most of us would kill for "something like Rust (good type system, syntax, tools) but without ownership/linear typing: just a good GC, everything-on-the-heap, and a dash of nice immutable data structures"...
Bgd1 20 hours ago [-]
Scala?
dragochat 3 hours ago [-]
good point, might be worth a revisit
crawshaw 2 days ago [-]
We have been having good luck writing Go with an agent. Sketch is mostly written with itself. (There is special Go handling built into the agent, e.g. automatically running gofmt/goimports after files change.) https://github.com/boldsoftware/sketch
The AI landscape moves so fast, and this conservative, backwards-looking mindset of the new Go dev team doesn't match the forward-looking LLM engineering mindset.
sorentwo 2 days ago [-]
Absolutely!
Elixir's lightweight processes and distribution story make it ideal for orchestration, and that includes orchestrating LLMs.
Unlike hatchet, it actually runs locally, in your own application as well.
jerf 2 days ago [-]
If it had a larger learning base, quite possibly.
Erlang possibly even more so. The argument that pure code is generally safer to vibe code is compelling to me. (Elixir's purity is rather complicated to describe, Erlang's much more obvious and clear.) It's easier to analyze that this bit of code doesn't reach out and break something else along the way.
Though it would be nice to have a language popular enough for the LLMs to work well on, that was pure, but that was also fast. At the moment writing in pure code means taking a fairly substantial performance hit, and I'm not talking about the O(n log n) algorithm slowdowns, I mean just normal performance.
guywithahat 2 days ago [-]
Well, Elixir doesn't use goroutines (managed threads); it uses "lightweight processes", which have isolated memory. These are more expensive to grow, and it isn't as easy to share data between them, although they're much more fault-tolerant as a result. It could be better, but the underlying concurrency model in Elixir is relatively unique.
regularfry 2 days ago [-]
It's been a while since I was in the weeds on this, but if I remember correctly they're strictly speaking mostly isolated. Binaries above a certain size share storage between processes, so moving big blobs between processes is cheap.
guywithahat 2 days ago [-]
They also have their own system to share data between processes, although I haven't used it. Generally though it's a unique tool that's not always interchangeable with Go
Funnily, it's also one of the reasons I stay with Go.
Error handling is the most controversial Go topic, with half the people saying it's terrible and needs new syntax, and half saying it's perfect and adding any more syntax would ruin it.
kamikaz1k 2 days ago [-]
by your logic, only considering part of the argument is as good as considering the entire argument.
jclulow 2 days ago [-]
I don't use either Go or LLMs, but isn't the point of LLMs that they write the tedious boilerplate for you? What's the value in a small syntactic improvement if the computer is generating it all anyway?
behnamoh 2 days ago [-]
Elixir's concurrency model is fundamentally different than Go's; it's not just syntax difference.
tptacek 2 days ago [-]
Elixir is great for agents.
achileas 2 days ago [-]
This makes me want to build agents in Elixir now
catlover76 2 days ago [-]
[dead]
kristopolous 1 days ago [-]
gleam is the best. go check it out.
debarshri 1 days ago [-]
I agree.
gyudin 1 days ago [-]
It is not. Human coding languages and paradigms revolve around solving problems that humans struggle with. We need AI coding languages that are easy to read and verify by humans, but that solve the problems AI agents struggle with.
wwarner 1 days ago [-]
I mean why not cpp? With AI support it’s much easier to write a safe cpp17 program.
arthurcolle 1 days ago [-]
Erlang is a way better fit for a distributed agent orchestration layer. You have a ton of dependencies, over network and maybe in userspace, you have a lot of inter-operability and reliability constraints, you want to hotswap code and capabilities at runtime, without degrading the overall system performance. And you get networking/distribution/async message passing for free
I consider myself an expert in this relatively niche domain and welcome follow up, critiques, and even your most challenging problems. I love this area and I think distributed systems are coming back in a big way in this new era!
guywithahat 1 days ago [-]
If you don't mind me asking why would one use Erlang over Elixir?
arthurcolle 1 days ago [-]
It doesn't feel different enough to merit a difference. Elixir is just a set of Erlang macros that turn one syntax into .beam that otherwise wouldn't turn into .beam
Elixir is way more productive to write/deal with (Phoenix vs. Erlang templating), maybe, if you're a web dev, but at the end of the day you're dealing with the exact same underlying architecture. If you're a Prolog programmer, Erlang will feel nicer than if you're a Ruby programmer.
I have many packages published as Mix packages, and some published as rebar packages.
Overall, ergonomics definitely feel nicer with Elixir, but I feel like by having it be portrayed as "so different" from Erlang, people don't pull open the Erlang/OTP docs, and don't look at the dozens of behaviors that already exist that usually solve your problem.
Like, why is there a gen stage in Elixir but not in Erlang?
If you wanna use the BEAM, you can use it. If they were more in sync, and provided OOTB in the same distribution, I'd always lean towards Elixir.
Just feels weird that Elixir gets a bunch of street cred for what are fundamentally Erlang/OTP capabilities
oxidant 1 days ago [-]
Elixir has a better developer experience, or at least it's more approachable. Better code splitting with modules, easier to use variables (no var, var1, var2), loops that look like loops but easy enough to fall back to recursion, and an easier to read syntax.
gen_stage is just a library. One could write it in Erlang. It's like asking why Broadway is only for Elixir and not Erlang.
It was hard to approach the Erlang docs when I started in Elixir. However, they've moved to an ex_doc format (is it ex_docs?) as a standard and it's so much easier to grok.
arthurcolle 1 days ago [-]
Yeah, I didn't think of that at the time I initially posted, but that's very true - I think pipes are definitely a key advantage of Elixir.
I couldn't imagine trying to implement this DSPy library in Erlang, for example
>Just feels weird that Elixir gets a bunch of street cred for what are fundamentally Erlang/OTP capabilities
I know what you mean, at the same time I'm thinking we should welcome any momentum from the Elixir community. The more people working with Elixir/Erlang the better. And if you try Elixir at some point you learn about the Elixir background.
eru 1 days ago [-]
Erlang has better syntax than Elixir.
But otherwise they are mostly the same: Elixir is just an Erlang reskin.
So pretty much wherever you can use one, you can use the other.
Teifion 1 days ago [-]
An anecdote which may be of interest. Speaking to Elixir and Erlang developers I found those who started with Erlang preferred its syntax while those who started with Elixir or didn't know either preferred the Elixir syntax.
eru 21 hours ago [-]
I agree that either way, Elixir is mostly just a reskin.
I would have liked it more if they had reskinned it to look more like Haskell. But that's just my preference.
samuell 1 days ago [-]
While I think there is some truth to that regarding the programming paradigm, I always felt the Erlang VM has two big drawbacks compared to something like Go:
1. Requiring a VM, making deployment more complex.
2. Not being natively compiled, so there is always a performance ceiling for the inner loops.
After considering both Erlang/Elixir and Go a lot for my scientific workflow manager, I finally went with Go for these exact reasons.
throwawaymaths 20 hours ago [-]
releases give you a tar.gz with everything bundled.
alienchow 19 hours ago [-]
The OG container.
hosh 1 days ago [-]
I came here to say that too.
It already does well coordinating IoT networks. It's probably one of the most underestimated systems.
The Elixir community has been working hard to be able to run models directly within BEAM, and recently, have added the capability for running Python directly.
From the article:

"""
Agents typically have a number of shared characteristics when they start to scale (read: have actual users):

They are long-running — anywhere from seconds to minutes to hours.

Each execution is expensive — not just the LLM calls, but the nature of the agent is to replace something that would typically require a human operator. Development environments, browser infrastructure, large document processing — these all cost $$$.

They often involve input from a user (or another agent!) at some point in their execution cycle.

They spend a lot of time awaiting i/o or a human.
"""

No. 1 doesn't really point to one language over another, and all the rest show that execution speed and server-side efficiency is not very relevant. People ask agents a question and do something else while the agent works. If the agent takes a couple seconds longer because you've written it in Python, I doubt that anyone would care (in the majority of cases at least).
I'd argue Python is a better fit for agents, mostly because of the mountain of AI-related libraries and support that it has.
> Contrast this with Python: library developers need to think about asyncio, multithreading, multiprocessing, eventlet, gevent, and some other patterns...
Agents aren't that hard to make work, and you can get most of the use (and paying users) without optimizing every last thing. And besides, the mountain of support you have for whatever workflow you're building means that someone has probably already tried building at least part of what you're working on, so you don't have to go in blind.
(I think you can effectively write an agent in any language and I think Javascript is probably the most popular choice. Now, generating code, regardless of whether it's an agent or a CLI tool or a server --- there, I think Go and LLM have a particularly nice chemistry.)
Take for example something like being able to easily swap models: in Python it's trivial with litellm. In niche languages you're lucky to even have an official, well-maintained SDK.
I agree that integration with the separate LLMs / agents can and does accelerate initial development. But once you write the integration tooling in your language of choice -- likely a few weeks worth of work -- then it will all come down to competing on good orchestration.
Your parent poster is right: languages like Erlang / Elixir or Golang (or maybe Rust as well) are better-equipped.
I've found that TypeScript is an excellent glue language for all kinds of AI. Python, followed by TS enjoy broad library support from vendors. I personally prefer it over Python because the type system is much more expressive and mature. Python is rapidly improving though.
> It turns out, cancelling long-running work in Node.js and Python is incredibly difficult for multiple reasons:
Evidence is lacking for this claim. Almost all tools out there support cancellations, and they're mostly either Python or JS.
You can safely swap out agents without redeploying the application, the concurrency is way below the scale BEAM was built for, and creating stateful or ephemeral agents is incredibly easy.
My plan is to set up a base agent in Python, Typescript, and Rust using MCP servers to allow users to write more complex agents in their preferred programming language too.
[0]: https://github.com/extism/extism [1]: https://github.com/extism/elixir-sdk
https://www.erlang.org/doc/apps/mnesia/mnesia.html
In my experience the performance of the language runtime rarely matters.
If there ever was a language feature that matters for agent performance and scale, it's actually the performance of JSON serialization and deserialization.
"arguably".
TypeScript is just a thin wrapper over JavaScript, which doesn't have these types at all.
AGDTs, mapped types, conditional types, template literal types, partial higher-kinded types, and real inference on top of all that.
It had one of the most fully loaded type systems out there while the Go team was still asking for community examples of where generics might be useful, because they weren't sure it would be worth it.
These platforms store an event history of the functions which have run as part of the same workflow, and automatically replay those when your function gets interrupted.
I imagine synchronizing memory contents at the language level would be much more overhead than synchronizing at the output level.
https://fly.io/blog/the-exit-interview-jp/
The reason I started working on Hatchet was because I'm a huge advocate of durable execution, but didn't enjoy using Temporal. So we try to make the development experience as good as possible.
On the underlying durable execution layer, it's the exact same core feature set.
Adding a durable boundary (via a task queue) in between steps is typically the first step, because you at least get persistence and retries, and for a lot of apps that's enough. It's usually where we recommend people start with Hatchet, since it's just a matter of adding a simple wrapper or declaration on top of the existing code.
Durable execution is often the third evolution of your system (after the first pass with no durability, then adding a durable boundary).
It must in my view at least, as that's how Oban (https://github.com/oban-bg/oban) in Elixir models this kind of problem. Full disclosure, I'm an author and maintainer of the project.
It's Elixir specific, but this article emphasizes the importance of async task persistence: https://oban.pro/articles/oban-starts-where-tasks-end
Through the use of both a map that holds a context tree and a database, we can purge old sessions and then reconstruct them from the database when needed (for instance, an async agent session with user input required).
We also don't have to hold individual objects for the agents/workflows/tools; we just make them stateless in a map and can reference the pointers through an id as needed. Then we have a stateful object that holds the previous actions/steps/"context".
To make sure the agents/workflows are consistent, we can hash the output agent/workflow (as these are serializable in my system).
I have only implemented basic Agent/tools though and the logging/reconstruction/cancellation logic has not actually been done yet.
Edit: Heh, I noticed after writing this that some sibling comments also mention Temporal.
The upside is that agent subtasks can be load balanced among servers, tasks won't be dropped if the process is killed, and better observability comes along with it.
The downside is definitely complexity. I'm having a hard time planning out an architecture that doesn't significantly increase the complexity of my agent code.
This is true regardless of the language. I always do a reasonable amount of work (milliseconds up to a few seconds) in a goroutine. Anything more and your web service is not as stateless as it should be.
The death knell for variety in AI languages was when Google rug-pulled Swift for TensorFlow.
-Someone who has written a ton of JS over the past... almost 30 years now.
Yes, today’s ML engineer has practically no choice but to use Python, in a variety of settings, if they want to be able to work with others, access the labor market without it being an uphill battle, and most especially if they want to study AI / ML at a university.
But there were also the choices to initially build out that ecosystem in Python and to always teach AI / ML in Python. They made sense logistically, since universities largely only teach Python, so it was a lowest-common-denominator language that allowed the universities to give AI / ML research opportunities to everyone, with absolutely no gatekeeping and with a steadfast spirit of friendly inclusion (sorry, couldn’t resist the sarcastic tangent). I can’t blame them for working with what they had.
But now that the techniques have grown up and graduated to form multibillion-dollar companies, I’m hopeful that industry will take up the mantle to develop an ecosystem that’s better suited for production and for modern software engineering.
“Like the CPU package, the module is accelerated by the TensorFlow C binary. But the GPU package runs tensor operations on the GPU with CUDA.”
They note that these operations are synchronous, so using them will sacrifice some of JavaScript’s effectiveness at asynchronous event processing. This is not different from Python when you are training or serving a model. JavaScript’s strengths would shine brighter when coordinating agents / building systems that coordinate models.
JS is a terrible language to begin with, and bringing it to the backend was a mistake. TS doesn’t change the fact that the underlying language is still a pile of crap.
So, like many, I’ll write anything—Go, Rust, Python, Ruby, Elixir, F#—before touching JS or TS with a ten-foot pole.
It's 2025, Node.js has been around since 2009, yet these languages still use C-based interpreters by default, and their non-standard JIT alternatives are still much worse than V8.
Use whatever language works well for you and the task at hand, but many enjoy fullstack JS/TS.
Sure, the libs are mostly written in C/C++, but all of them have first-class support for Python and oftentimes Python only. Serving the model is a different story and you can use whatever language to do so.
As someone who has worked in the DS realm for an extended period of time, I can tell you Python has practically zero competition when it comes to data wrangling and training models. There are plenty of contenders when it comes to serving the models or building “agents.”
As for type checking, yeah, it sucks big time. TS is a much better type system than the bolted-on hints in Python. But it’s still JS at the end of the day. All the power of V8, a zillion other runtimes, and TS gets marred by a terribly designed language.
Use whatever you like.
Pouchdb. Hypercore (pear). It’s nice to be able to spin up JS versions of things and have them “just work” in the most widely deployed platform in the world.
TensorflowJS was awesome for years, with things like blazeface, readyplayer me avatars and hallway tile and other models working in realtime at the edge. Before chatgpt was even conceived. What’s your solution, transpile Go into wasm?
Agents can work in people’s browsers as well as node.js around the world. Being inside a browser gives a great sandbox, and it’s private on the person’s own machine too.
This was possible years ago: https://www.youtube.com/watch?v=CpSzT_c7_UI&t=10m30s
I do my best to run as little in the browser as possible. Everything is an order of magnitude simpler and faster to build if you do the bulk of things on a server in a language of your choice and render to the browser as necessary.
Some things are just more natural in Python, it being a dynamic language. E.g. decorators to quickly convert methods into tool calls, iterating over tool functions to create a list of tools, packages to quickly convert them into JSON schema, etc.
Consuming many incoming triggers (e.g. user input, as well as incoming emails from Gmail or messages from Slack, which would trigger a new agent run) was a lot more natural in Go with channels and a for/select loop vs. in Python, where I had to create many queues and threads.
On the other hand, the programming languages used by LLM people seem to be python and javascript mainly.
So while I argue that they all should really move on to modern languages, I think go is still better than the I-can't-even-install-this mess of python and javascript imports without even a Dockerfile that seem to be so prevalent in LLM projects.
Honest question, I am genuinely interested in what cannot be done easily or at all due to limitations of the Go type system.
You can find many articles on the internet about it, but from my experience I would summarize it as:
It looks like it's made to have a simple compiler, not to simplify the programmer's life.
Initially its simplicity is wonderful. Then you start to notice how verbose things are. Channels are another looks-nice-but-maybe-don't feature. nil vs nil-interface. The lack of proper enums hurts so much I can't describe it. I personally hate the automatic type conversions, and there are so many inconsistencies in the standard and most-used libraries that you really start to wonder why some things were even done. Validators that validate nothing, half-done tagging systems for structs, tons of similar-but-not-quite interfaces and methods.
It's like the language has training wheels that you can't shake off or work around. You end up wanting to leave for a better one.
People had to beg for years for basic generics and small features. If google is not interested in it, you'd better not be interested in it and it shows after a while.
Companies started to use it as an alternative to C and C++, while in reality it's an alternative to Python. Just like in Python, a lot of the work and warnings are tied into the linter as a clear workaround. Our linter config has something like 70+ linter classes enabled, and we are a very small team.
C can be described as a relatively simple language (with caveats), C++ has grown to a blob that does and has everything, and while they have lots of footguns I did not find the same level of frustration as with go. You always end up fighting a lot of corner cases everywhere.
Wanted to say even more, but I think I ranted enough.
Do you mean sum types? That is not a case of them not being "proper", though. They simply do not exist as a feature at all.
Go's enums function pretty much like enums in every single other language under the sun. If anything, Go enums are more advanced than most languages, allowing things like bit shifts. But at the heart of it all, it's all just the same. Here are enum implementations in both Go and Rust:
[Go] https://github.com/golang/go/blob/f18d046568496dd331657df4ba...
[Rust] https://github.com/rust-lang/rust/blob/40daf23eeb711dadf140b...
Go leans on the enum value produced by `range` to act as the language's enumerator, while Rust performs explicit incrementing to produce the enumerator, but the outcome is no different — effectively nothing more than [n=0, n++]. Which stands to reason, as that's literally, as echoed by the dictionary, what an enum is.
Yes, you can emulate this style of enums by using iota to start a self-incrementing list of integer constants. But that's not what any language (except for C) has ever meant by "enum".
Enums are generally assumed to be type-safe and namespaced. But in Go, they are neither:
There is no namespacing, no way to — well — enumerate all the members of the enum, no way to convert the enum value to or from a string (without code-generation tools like stringer), and the worst "feature" of all is that enums are just integers that can freely receive incorrect values.

If you want to admire a cool hack that you can show off to your friends, then yeah, iota is a pretty neat trick. But as a language feature it's just an ugly and awkward footgun. Being able to auto-increment powers of two is a very small consolation prize for all of that (and something you can easily achieve in Rust anyway with any[1] number[2] of crates[3]).
[1] https://crates.io/crates/enumflags2
[2] https://crates.io/crates/bitmask-enum
[3] https://crates.io/crates/modular-bitfield
Sure, but now you're getting into the topic of types. Enums produce values. Besides, Go isn't really even intended to be a statically-typed language in the first place. It was explicitly stated when it was released that they wanted it to be like a dynamically-typed language, but with statically-typed performance.
If you want to have an honest conversation, what other dynamically-typed languages support type-safe "enums"?
> But that's not what any language (except for C) has ever meant by "enum".
Except all the others. Why would an enum, when used while looping over an array, have a completely different definition? It wouldn't, of course. Enums are called what they are in a language because they actually use enums in the implementation, as highlighted in both the Go and Rust codebases above.
Many languages couple enums with sum types to greater effect, but certainly not all. C is one, but even Typescript, arguably the most type-intensive language in common use, also went with "raw" enums like Go.
Even without sum types, there is a common pattern of defining a new type and const-defining the possible values that is a clear workaround on the lack of an 'enum' keyword.
Maybe because the compiler can't be sure that those const values are all the possible values of the type, we can't have things like enforcing exhaustive switches on this "enum", and that is left to the linter at best.
Default-zero initialization is always valid too, which can leave you with an "enum" value that is not present in the const definitions (not everything starts on iota, iota does not mean 0).
It's a hack, it became a pattern. It still is not a proper (or even basic) enum even without sum types.
It is to the extent that it helps explain what an enum is, and why we call the language feature what we do. Python makes this even more apparent as you explicitly have to call out that you want the enum instead of it always being there like in Go:
In case I'm not being clear, an array enumerator like in the above code is not the same as a language enumerator, but an array enumerator (or something similar in concept) is how language enumerators are implemented. That is why language enumerators got the name they did.

> It still is not a proper (or even basic) enum even without sum types.
It most certainly is "proper". In fact, you could argue that most other languages are the ones that are lacking. Go's enums support things like bit shifts, which is unusual in other languages. Perhaps it is those other languages that aren't "proper"?
But, to be sure, it's not sum types. That is certain. If you want sum types you are going to have to look elsewhere. Go made it quite clear from the beginning that it wanted to be a "dynamically-typed language with statically-typed performance", accepting minimal static type capability in order to support the performance need.
There is definitely a place for languages with more advanced type systems, but there are already plenty of them! Many considerably older than Go. Haskell has decades on Go. Go was decidedly created to fill the niche of "Python, but faster", which wasn't well served at the time. Creating another Haskell would have been silly and pointless: just another addition to the long list of obscure languages serving no purpose.
I thought the main "let's migrate our codebase to Go" crowd had always been from the Java folks, especially the enterprise ones. Any C/C++ code that is performant is about to get a hit, albeit small, from migrating to a GC-based runtime like Go, so I'd think that could be a put off for any critical realtime stuff - where Rust can be a much better target. And, true for both C++ and Java codebases, they also might have to undergo (sic) a major redux at the type/class level.
But yes, the Googlers behind Go were frustrated by C++ compile times, tooling warts, the 0x standard proposal and concurrency control issues - and that was primal for them, as they wanted to write network-server software that was tidy and fast [1]. Java was a secondary (but important) huge beast they wanted to tackle internally, IIRC. Java was then the primary language Googlers were using on the server... Today apparently most of their cloud stuff is written in Go.
[1] https://evrone.com/blog/rob-pike-interview
There's a lot of software out there that either was written before good modern options existed, or uses very outdated patterns, or its language wasn't chosen with much thought.
If you are interested in the merits of golang, you should listen to someone who uses it.
There's a lot of discussions on the internet about the bad design decisions of Golang (for example around channels, enums, error handling, redeclarations, interfaces, zero values, nilability... at least generics aren't so much a subject anymore)
While you could try to argue that dynamically-typed languages in general are a poor fit for everything, the reality is that people there are typically using Python instead – and the best alternative suggestions beyond Go are Erlang and Elixer, which are also dynamically typed, so that idea doesn't work. Dynamic typing is what clearly fits the problem domain.
But despite all of that the language has some really good qualities too- interfaces works far better than it feels like they should; the packaging fits together really well (I'm learning Rust right now and the file structure is far more complicated); and people are able to write a lot of linters/codegen tools BECAUSE the language is so simple.
All in all I worry the least about the long term maintenance cost of my Go code, especially compared to my Python or JS code.
There isn't much more you can do with them. Literally all an enum can produce is a number.
In increasingly common use in a number of languages, enums are being coupled with discriminated unions, using the enum value as the discriminant. This is probably what you're really thinking of, noticing Go's lack of unions.
But where you might use a discriminated union if it were available, you would currently use an interface, where the type name, rather than an enum, is what differentiates the different types. If Go were to gain something like a discriminated union, it is likely it would want to extend upon that idea. Most especially given that generics already introduced syntax that would lend itself fairly well to it:
Where enums are used to act as a discriminant in other languages, that is largely just an implementation detail that nobody really cares about. In fact, since you mentioned Rust, there are only 8,000 results on Github for `std::mem::discriminant` in Rust code, which is basically nothing. That is quite indicative that wanting to use an enum (in Rust, at least) is a rare edge case. Funny, given how concerned Rust users seem to be about enums. Talk is cheap, I suppose.

* = There is a caveat here, which is that the sealed set of variants can differ between compile-time (what's in a .java file) and runtime (what's in a .class file), but this only happens when you mismatch your dependency versions. Rather importantly, the resolution of enum variants by the classloader is based on their name and not their ordinal, so even if the runtime class differs from the compile-time source, Day.MONDAY will never be turned into a differently named variant.
Then I am not sure how you think it is an enum? What defines an enum, literally by dictionary definition, is numbering.
It is hilarious to me that when enum is used in the context of looping over an array, everyone understands that it represents the index of the element. But when it comes to an enum in a language, all of a sudden some start to think it is magically something else? But the whole reason it is called an enum is because it is produced by the order index of an AST/other intermediate representation node. The very same thing!
While I haven't looked closely at how Java implements the feature of which you speak, I'd be surprised if it isn't more or less the same as how Rust does it under the hood. As in using a union with an enumerator producing the discriminant. In this case it would be a tag-only union, but that distinction is of little consequence for the purposes of this discussion. That there is an `ordinal` method pretty much confirms that suspicion (and defies your claim).
> you cannot represent Day(7) or Day(-1) in any way, shape, or form.
While that is true, that's a feature of the type system. This is a half-assed attempt at sum types. Enums, on the other hand, are values. An enum is conceptually the same as you manually typing 1, 2, 3, ... as constants, except the compiler generates the numbers for you automatically, which is what is happening in your example. The enum is returned by `ordinal`, like you said. Same as calling std::mem::discriminant in Rust like we already discussed in a sibling thread.
You seem to be trivializing the type system. This property is not imagined solely by the compiler, it is carried through the language and runtime and cannot be violated (outside of bugs or unsafe code). Go has nothing like this.
If you choose to call this "not an enum", that is certainly your idiosyncratic prerogative, but that doesn't make for very interesting discussion. Even though I agree that discriminated unions aren't enums and am somewhat annoyed by Rust's overloading of the term, this is not that.
It strongly suggests that the implementation is a discriminated union, just like Rust's. Again, it is tag-only in this case, where Rust also allows attaching a payload, but that's still a type of discriminated union. That it is a set of integers (contrary to the claim made earlier), combined with your explanation of how the compiler type-checks against the union state, reveals that it couldn't be anything other than a discriminated union, effectively identical to what we find in Rust and many other languages these days.
> It can be (and is) simply a field on each Day object, not an index into anything
So...? An enum is not particular about exactly where the number comes from; it simply needs to number something. Indices are convenient, though, and I am not sure why you would use anything else. That doesn't necessarily mean the index will start where you think it should, of course.
For example,
In some languages, the indices might "reset" for each enum [A=0, B=1, C=2, X=0, Y=1, Z=2], while in other languages they might "count from the top" [A=0, B=1, C=2, X=3, Y=4, Z=5]. But, meaningless differences aside, where else is the number going to come from? Using a random number generator would be silly. But, humour us: how does Java produce its enums, and why doesn't it use indices for that? Moreover, why did they choose the word `ordinal` for the method name when that literally expresses that it is the positional index?
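For what it's worth, Go's iota is a concrete example of the "reset per enum" numbering scheme: the counter restarts at zero in every const block. A minimal sketch (type and constant names made up to match the bracketed example above):

```go
package main

import "fmt"

// Two separate Go "enums": iota restarts at 0 in each const block,
// matching the [A=0, B=1, C=2, X=0, Y=1, Z=2] numbering above.
type Foo int

const (
	A Foo = iota // 0
	B            // 1
	C            // 2
)

type Bar int

const (
	X Bar = iota // 0 again: iota resets for each const block
	Y            // 1
	Z            // 2
)

func main() {
	fmt.Println(A, B, C) // 0 1 2
	fmt.Println(X, Y, Z) // 0 1 2
}
```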
Now, it really seems to be in the weeds of pedantry when you start talking about discriminated unions that have only discriminants and no payload. Taking from your examples, the key point is that a Foo is not a Bar and is also not an int. Regardless of whether the variants are distinct or overlapping in their ordinals, they are not interchangeable with each other or with machine-sized integers.
Yes, this echoes what I stated earlier: "An enum is conceptually the same as you manually typing 1, 2, 3, ... as constants, except the compiler generates the numbers for you automatically." Nice to see that your understanding is growing.
> Taking from your examples, the key point is that a Foo is not a Bar.
I'm not sure that's a useful point. Nobody thinks
...are treated as being the same in Java, or, well, any language that allows defining types of that nature. That is even the case in Go! But what is significant to the discussion about enums is the value that drives the inner union of the class. As in, the numbers that rest beneath SUNDAY, MONDAY, TUESDAY, etc. That's the enum portion. The values of Day are {SUNDAY, ..., SATURDAY}, not {0, ..., 6}. We can, of course, establish a 1:1 mapping between those two sets, and the API provides a convenient forward mapping through the ordinal method and a somewhat less convenient reverse mapping through the values static method. However, at runtime, instances of Day are pointers, not numbers, and ints outside the range [0, 6] will never be returned by the ordinal method and will cause IndexOutOfBoundsException if used like Day.values()[ordinal].
Tying back to the purpose of this thread, Go cannot deliver the same guarantee. Even if we define
then we can always construct Day(-1) or Day(7), and we must consider them in a switch statement. It is also trivial to cast to another "enum" type in Go, even if the variant doesn't exist on the other side. This sealed, nonconvertible nature of Java enums makes them "true" enums, which you can call tag-only discriminated unions or whatever you want, but no such thing exists in Go. In fact, it is not even possible to directly adapt the Java approach, since sealed types of any kind, including structs, are impossible thanks to new(T) being allowed for all types T. It is no secret that Go has a limited type system. In fact, upon release it was explicitly stated that the goal was for it to be a "dynamically-typed language with statically-typed performance", meaning that what limited type system it does have is there only to support the performance goals. You'd have to be completely out to lunch while also living under a rock to think that Go has "advanced" types.
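A minimal Go sketch of that point (the Day definition above was elided, so a typical iota-style definition is assumed here):

```go
package main

import "fmt"

// A typical Go "enum": just named constants over an integer type.
// Nothing seals the set of values.
type Day int

const (
	Sunday   Day = iota // 0
	Monday              // 1
	Saturday Day = 6    // skipping a few for brevity
)

type Month int

func main() {
	d := Day(7)   // compiles and runs: no variant check
	e := Day(-1)  // ditto
	m := Month(d) // trivially converted to a different "enum" type
	fmt.Println(d, e, m) // 7 -1 7

	// A switch over Day must account for out-of-range values.
	switch d {
	case Sunday, Monday, Saturday:
		fmt.Println("a real day")
	default:
		fmt.Println("not a day at all") // this branch is reachable
	}
}
```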
But, as before, enums are values. It is not clear why you want to keep going back to talking about type systems. That is an entirely different subject. It may be an interesting one, but it is off-topic as it pertains to this discussion specifically about enums, and especially not useful in the context of Go, which isn't really intended to be a statically-typed language in the first place.
enum Discriminant {
Disc0 = 0,
Disc1 = 1,
…
}
And because it has been used like that in C for decades, the dictionary definition takes a backseat to the now de-facto C-based definition (at least for popular systems languages, which Rust is trying to share as much syntax with).
Meaning the keyword? Sure, C has the same inconsistency if you disable the enumerator with manual constant values. C is not exactly the paragon of thoughtful design. But whataboutism is a dumb path to go down.
> the dictionary definition takes a backseat to the now de-facto C-based definition
That's clearly not the case, though, as the functionality offered by the Rust enum keyword is very different. It puts absolutely no effort into being anything like C. Instead, it uses enum as the keyword for defining sum types. The C enum keyword, on the other hand, does nothing but define constants, and is functionally identical to what Go has. There is an enum involved in both cases, as demonstrated earlier, so the terminology isn't strictly wrong (in the usual case) but the reason for it existing shares little commonality.
But maybe you've moved on to the concept of enums rather than syntax and I didn't notice? You are right that the dictionary definition is in line with the intent of the C keyword, which speaks to the implementation, and is how C, Rust, Go, and every other language out there use the terminology. In another comment I even linked to the implementation in both Go and Rust, and you can see that the implementation is conceptually the same in both cases: https://news.ycombinator.com/item?id=44236666
https://techblog.steelseries.com/golisp/index.html
https://github.com/SteelSeries/golisp
I wonder if they still use it.
But outside of that, ML in Go is basically impossible. Trying to integrate with the outside ecosystem from Go is really difficult, and my experience has been that Claude Code is far less effective with Go than it is with Python, or even Swift.
I ditched a project I was writing in Go and replaced it with Swift (this was mostly prompt based anyways). It was remarkable how much better the first pass of the code generation was.
Frankly, anything that has a compiler and supports doing asynchronous stuff decently probably does the job. Which of course describes a wide range of languages. And since agents inherently involve a lot (some would say mostly) of prompt engineering, it helps if the language is good at things like multi-line strings, templated strings, and just generally manipulating strings.
As for the async stuff, it's nice if a language can do async things. But is that enough? Agentic systems essentially reach out to other systems over the network. Some of the tasks may be long-lived: minutes, hours, or even days. A lot can happen in such a long time. IMHO the model of one system keeping all that state in a long-running process is probably not ideal. We might want something more robust and less dependent on some stateful process running somewhere for days on end.
There is an argument to be made for externalizing related state from the language and maybe using some middleware optimized for this sort of thing. I've seen a few things that go in that direction but not a lot yet. It seems that people are still busy reinventing wheels and not fully realizing yet that a lot of those wheels don't need reinventing. There's a lot of middleware out there that is really great at async job scheduling, processing, fan out, and all the other stuff that people eventually will figure out is needed here.
1. If you make your agents/workflows serializable you can run/load them from a config file or add/remove them from a decoupled frontend. You can also hash them to make versioning easy to track/immutable.
2. If you decouple the stateful object from the agent/workflow object, you can store it through sufficient logging, then rebuild any flow at any state and get branching by allowing traces to build on one another. You can also restart/rerun a flow starting at any location.
3. You can allow for serializable tools by having a standard HttpRequestTool, then set up Cloudflare Workers (or any external endpoints) for the actual tool-call logic. This removes load from the primary server and makes it possible to add/remove tools without rebuilding/restarting.
Given this system in golang you can have a single server which supports tens of thousands of concurrent agent workflows.
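A rough Go sketch of points 1 and 2 (all type and field names here are hypothetical, not from any real framework): the workflow is plain data, so it can be loaded from a config file and hashed to get an immutable version identifier.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"encoding/json"
	"fmt"
)

// Hypothetical serializable workflow definition (point 1).
type Step struct {
	Name string `json:"name"`
	Tool string `json:"tool"` // e.g. an HttpRequestTool endpoint (point 3)
}

type Workflow struct {
	Name  string `json:"name"`
	Steps []Step `json:"steps"`
}

// Version hashes the JSON encoding of the workflow, giving an
// immutable identifier for versioning (point 1).
func (w Workflow) Version() string {
	b, _ := json.Marshal(w)
	sum := sha256.Sum256(b)
	return hex.EncodeToString(sum[:8])
}

func main() {
	w := Workflow{
		Name: "summarize",
		Steps: []Step{
			{Name: "fetch", Tool: "https://tools.example.com/fetch"},
			{Name: "llm", Tool: "https://tools.example.com/llm"},
		},
	}
	// Stable as long as the definition is unchanged.
	fmt.Println(w.Name, w.Version())
}
```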
The biggest problem is that there aren't that many people working on it. So even if you can make agents 100x more efficient by running them in Go, it doesn't really matter if cost isn't the biggest factor for the final implementations.
The actual compute/server/running costs for big AI agent implementation contracts are <1%, so making them 100x more efficient doesn't really matter.
Along these lines, I'm building Dive:
https://github.com/diveagents/dive
When building a SaaS with a Go backend, it's nice to be able to have the option of the agents and workflows being in the same process. And being confident in the ability of that to scale well.
While it's true that Go lacks good ML libraries, this isn't too consequential if your app primarily uses Anthropic or OpenAI and a database that offers semantic or hybrid search for RAG. The ML is done elsewhere. Plus, if you can leverage MCP servers, at that point you're language agnostic.
Regarding the concurrency model approach with Go and agents, I initially baked a message-based approach (à la the Actor model, with one goroutine per agent) into Dive Agents, but eventually found that this would be better implemented as another layer. So currently in Dive it's the user's choice how to implement concurrency and whether to use messaging. But I anticipate building that back in as an optional layer.
> Share memory by communicating
> Centralized cancellation mechanism with context.Context
> Expansive standard library
> Profiling
> Bonus: LLMs are good at writing Go code
I think profiling is probably the lowest value good here, but would be willing to hear out stories of AI middleware applications that found value in that.
Cancelling tasks is probably the highest value good here, but I think the contending runtimes (TS/Python) all prefer using 3P libraries to handle this kind of stuff, so probably not the biggest deal.
Being able to write good Go code is pretty cool though; I don't write enough to make a judgement there.
Good at writing bad code. But most of the code in the wild is written by mid-level devs, without guidance and on short timelines, i.e. bad code. But this is a problem with all languages, not just Go.
The language of agents doesn't matter much in the long run as it's just a thin shell of tool definitions and API calls to the backing LLM.
You need a DSL, either supported in the language or through configuration. These are features you get for free in Python and, to a lesser degree, JavaScript. You have to write most of this yourself in Go.
I think I'd condense this out to "this is not a really important deciding factor in what language you choose for your agent". If you know you need something you can only get in Python, you'll write the agent in Python.
So every discussion about the "best" programming language is really you telling the world about your favorite language.
Use Go. Use Python. Use JavaScript. Use whatever the hell else you want. They are all good enough for the job. If you are held back it won't be because of the language itself.
But programming languages make tradeoffs on those very paths (particularly spawning child processes and communicating with them, how underlying memory is accessed and modified, garbage collection).
Agents often involve a specific architecture that's useful for a language with powerful concurrency features. These features differentiate the language as you hit scale.
Not every language is equally suited to every task.
This fits LLMs pretty well too it seems!
The issue with Go is that as soon as you need to do actual machine learning, it falls down.
The issue with Python is that you often want concurrency in agents, although this may be solved with Python's new threading.
Why is Rust great? It interops very well with Python, so you can write any concurrent pieces in Rust and simply import them into Python, without needing to sacrifice any ML work.
I'll be honest, Go is a bit of an odd fit in the world of AI, and if that's the future, I'm not sure Go has a big part to play outside of some infra stuff.
LLM researchers care about neither since Rust comes with its own headache: learning curve, slow compilation, weak stdlib, and Go’s FFI story is just sad. It’s still Python or GTFO.
That said, Go is great to whip up “agents” since it’s a nicer language to write networking and glue code, which is what agents are. Other than a few niche groups, I’ve seen a lot more agents written in Go than in Rust.
Agents that don’t do machine learning rarely ever work, that’s the sad truth of the ecosystem.
Dive orchestrates multi agent workflows in Go. Take a look and let me know what you think.
Beneath all the jargon, it’s good to remember that an “agent” is ultimately just a bunch of http requests and streams that need to be coordinated—some serially and some concurrently. And while that sounds pretty simple at a high level, there are many subtle details to pay attention to if you want to make this kind of system robust and scalable. Timeouts, retries, cancellation, error handling, thread pools, thread safety, and so on.
This stuff is Go’s bread and butter. It’s exactly what it was designed for. It’s not going to get you an MVP quite as fast as node or python, but as the codebase grows and edge cases accumulate, the advantages of Go become more and more noticeable.
1 - https://github.com/plandex-ai/plandex
pretty please :P
we all yearn for a good static language, and most of us would kill for "something like Rust (good type system, syntax, tools) but without ownership / linear-typing - just a good GC, all-on-the-heap and a dash of nice immutable datastructs"...
by that logic Elixir is even better for agents.
also the link at the bottom of the page is pretty much why I ditched Go: https://go.dev/blog/error-syntax
The AI landscape moves so fast, and this conservative, backwards looking mindset of the new Go dev team doesn't match the forward looking LLM engineering mindset.
Elixir's lightweight processes and distribution story make it ideal for orchestration, and that includes orchestrating LLMs.
Shameless plug, but that's what many people have been using Oban Pro's Workflows for recently, and something we demonstrated in our "Cascading Workflows" article: https://oban.pro/articles/weaving-stories-with-cascading-wor...
Unlike hatchet, it actually runs locally, in your own application as well.
Erlang possibly even more so. The argument that pure code is generally safer to vibe code is compelling to me. (Elixir's purity is rather complicated to describe, Erlang's much more obvious and clear.) It's easier to analyze that this bit of code doesn't reach out and break something else along the way.
Though it would be nice to have a language popular enough for the LLMs to work well on that was pure but also fast. At the moment, writing in pure code means taking a fairly substantial performance hit, and I'm not talking about the O(n log n) algorithm slowdowns; I mean just normal performance.
Funnily, it's also one of the reasons I stay with Go.
Error handling is the most controversial Go topic, with half the people saying it's terrible and needs a new syntax, and half saying it's perfect and adding any more syntax will ruin it.
https://github.com/arthurcolle/agents.erl
I consider myself an expert in this relatively niche domain and welcome follow up, critiques, and even your most challenging problems. I love this area and I think distributed systems are coming back in a big way in this new era!
Elixir is way more productive to write/deal with (Phoenix vs. Erlang templating), maybe, if you're a web dev, but at the end of the day you're dealing with the exact same underlying architecture. If you're a Prolog programmer, Erlang will feel nicer than if you're a Ruby programmer.
I have many packages published as Mix packages, and some published as rebar packages.
Overall, ergonomics definitely feel nicer with Elixir, but I feel like by having it be portrayed as "so different" from Erlang, people don't pull open the Erlang/OTP docs, and don't look at the dozens of behaviors that already exist that usually solve your problem.
Like, why is there a gen stage in Elixir but not in Erlang?
If you wanna use the BEAM, you can use it. If they were more in sync, and provided OOTB in the same distribution, I'd always lean towards Elixir.
Just feels weird that Elixir gets a bunch of street cred for what are fundamentally Erlang/OTP capabilities
gen_stage is just a library. One could write it in Erlang. It's like asking why Broadway is only for Elixir and not Erlang.
It was hard to approach the Erlang docs when I started in Elixir. However, they've moved to an ex_doc format (is it ex_docs?) as a standard and it's so much easier to grok.
I couldn't imagine trying to implement this DSPy library in Erlang, for example
https://hexdocs.pm/dspy/0.1.0/api-reference.html
I know what you mean, at the same time I'm thinking we should welcome any momentum from the Elixir community. The more people working with Elixir/Erlang the better. And if you try Elixir at some point you learn about the Elixir background.
But otherwise they are mostly the same: Elixir is just an Erlang reskin.
So pretty much wherever you can use one, you can use the other.
I would have liked it more if they had reskinned it to look more like Haskell. But that's just my preference.
1. Requiring a VM, making deployment more complex.
2. Not being natively compiled, i.e. always having a performance ceiling for the inner loops.
After considering both Erlang/Elixir and Go a lot for my scientific workflow manager, I finally went with Go for these exact reasons.
It already does well coordinating IoT networks. It's probably one of the most underestimated systems.
The Elixir community has been working hard to be able to run models directly within BEAM, and recently, have added the capability for running Python directly.
What are they doing with Python on the BEAM these days? I'm OOTL