But then it has other weird features too, like they seem to be really emphasising "friendliness" (great!) but then it has weird syntax like `\` for anonymous functions (I dunno where that dumb syntax came from, but Nix also uses it and it's pretty awful). Omitting brackets and commas for function calls is also a bad decision if you care about friendliness. I have yet to find a language where that doesn't make the code harder to read and understand.
Skinney 10 hours ago [-]
The syntax came from Elm, which got its syntax from Haskell (where Nix also got it from), which got its syntax from ML.
It’s a syntax that’s several decades old at this point.
It’s different, but not harder. If you learned ML first, you’d find Algol/C-like syntax equally strange.
wk_end 10 hours ago [-]
(ETA: speaking strictly about anonymous functions; on rereading you might be talking about the absence of parens and commas for function application.)
That's not ML syntax. Haskell got it from Miranda, I guess?
In SML you use the `fn` keyword to create an anonymous function; in Ocaml, it's `fun` instead.
throwaway17_17 10 hours ago [-]
I believe the `\` character for functions is original to Haskell. Miranda does not have anonymous functions as a part of the language.
pezezin 4 hours ago [-]
The \ is a simplified lambda, because most programmers can't type λ easily.
needlesslygrim 6 hours ago [-]
Well, ML (or at least the first versions of it) used a λx • x syntax [1] for λ-abstractions, the same notation (excluding the use of • over .) as used in the Lambda Calculus, and I've always assumed \ was an ASCII stand-in.
That paper isn't showing real ML syntax itself; it's a mathematical presentation to demonstrate how the type system algorithm works. The actual original LCF/ML syntax would differ. I don't believe it used an actual lambda character, although for the life of me I can't find any evidence one way or another, not even in the LCF source code (https://github.com/theoremprover-museum/LCF77)
But yes, the slash is just an ASCII stand-in for a lambda.
ETA: I tracked down a copy of the Edinburgh LCF text and I have to eat crow. It doesn't use a lambda, but it does use a slash rather than a reserved word. The syntax, per page 22, is in fact `\x. e`: similar to Haskell's, but with a dot instead of an arrow.
oh wow it went from being a very clear language to looking more like a hodgepodge of a few different languages.
hajile 6 hours ago [-]
Pretty much all of those changes look bad to me.
masijo 8 hours ago [-]
Jesus, why? This is a bummer.
mrkeen 10 hours ago [-]
It's the closest you get to 'λ' on a US keyboard.
fuzztester 8 hours ago [-]
IIRC, Richard explains that in one of his videos about Roc. I have seen at least a handful of them.
LAC-Tech 9 hours ago [-]
I believe Haskell uses it as well.
nemo1618 8 hours ago [-]
my hot take: the language should accept \, but formatters should replace it with λ
hajile 5 hours ago [-]
I feel that he got a lot of pressure from the FP community and wrote a bunch of nonsense instead of being straightforward with them.
The only relevant reason he lists is point-free, but he doesn't go far enough. Point-free very often turns into write-only balls of unmaintainable nastiness. Wanting to discourage this behavior is a perfectly reasonable position. Unfortunately, this one true argument is given the most tepid treatment of all the reasons.
Everything else doesn't hold water.
As he knows way better than most, Elm has auto-curry and has been the inspiration for several other languages getting better error messages.
Any language with higher-order functions can give a function as a result, and if you haven't read the docs or checked the type, you won't expect it. He left higher-order functions in, so even he doesn't really believe this complaint.
The argument about currying and pipe isn't really true. The pipe is static syntax known to the compiler at compile time. You could just decide that the left argument is applied/curried to the function before the right argument.
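The point about the pipe being static syntax can be sketched concretely. This is a hedged illustration, not Roc's actual implementation: `add` and `pipe_desugared` are invented names, and the comment shows what a hypothetical `x |> add 2` could lower to without any curried intermediate function.

```rust
// Since a pipe like `x |> add 2` is known to the compiler at
// compile time, it can desugar straight to `add(x, 2)`; no
// partially-applied function object ever needs to exist.
fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn pipe_desugared(x: i32) -> i32 {
    // what `x |> add 2` could compile to:
    add(x, 2)
}

fn main() {
    assert_eq!(pipe_desugared(40), 42);
}
```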
I particularly hate the learning curve argument. Lots of great and necessary things are hard to learn. The only question is a value judgement about whether the learning is worth the payoff. I'd guess that most Roc users already learned about currying in a more popular FP language before ever looking at Roc, so I don't think this argument really applies here (though I wouldn't really care if he still believed it wasn't worth the learning payoff for the fraction of remaining users).
To reiterate, I agree with his conclusion to exclude currying, but I wish he were more straightforward with his one good answer that would tick off a lot of FP users rather than resorting to a ton of strawman arguments.
msla 5 hours ago [-]
> `\` for anonymous functions
A one-character ASCII rendering of the Greek lowercase letter lambda: λ
λx → x + 5
\x -> x + 5
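For contrast with the Haskell spelling above, Rust (discussed at length further down the thread) writes the same anonymous function with pipes rather than a backslash. The `add5` wrapper is just a made-up name for the example:

```rust
// Haskell: \x -> x + 5
// Rust's equivalent closure uses pipes around the parameter.
fn add5(x: i32) -> i32 {
    let f = |n: i32| n + 5;
    f(x)
}

fn main() {
    assert_eq!(add5(37), 42);
}
```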
LAC-Tech 6 hours ago [-]
Opposite effect here: I lost interest in Roc after reading that.
If anything I don't think Haskell goes far enough with the automatic currying, point-free stuff. If you're going to be declarative, don't half-ass it.
xiaodai 10 hours ago [-]
R has that too.
Iwan-Zotow 6 hours ago [-]
really? thought it just function(x)
zoogeny 10 hours ago [-]
It's nice to see Zig continuing to gain support. I have no idea why I've ended up siding with Zig in the low-level language wars.
I used to root for the D programming language, but it seems to have got stuck and never gained a good ecosystem. I've disliked Rust from the first time I saw it and have never warmed up to its particular trade-offs. C feels unergonomic these days and C++ is overfull with complexity. Zig feels like a nice pragmatic middle ground.
I actually think Rust is probably perfectly suited to a number of tasks, but I feel I would default to choosing Zig unless I was certain beyond doubt that I needed specific Rust safety features.
bornfreddy 10 hours ago [-]
I wanted to like Rust (memory safety and performance - what's not to like?) but both of my experiments with it ended in frustration. It seemed a way too complex language for what I needed.
Recently, a coworker of mine made a great observation that made everything clear to me. I was looking for a replacement for C, but Rust is actually a replacement for C++. Totally different beast - powerful but complex. I need to see if Zig is any closer to C in spirit.
pcwalton 10 hours ago [-]
> I was looking for a replacement for C, but Rust is actually a replacement for C++. Totally different beast - powerful but complex.
I've seen this sentiment a lot, and I have to say it's always puzzled me. The difference between Rust and basically any other popular language is that the former has memory safety without GC†. The difference between C++ and C is that the former is a large multi-paradigm language, while the latter is a minimalist language. These are completely different axes.
There is no corresponding popular replacement for C that's more minimalist than Rust and memory safe.
† Reference counting is a form of garbage collection.
munificent 8 hours ago [-]
Board games and craft beers are utterly unrelated objects that have commonality on essentially no axes (except sitting on tables, I guess). And, yet, if you like one, there's a very good chance you like the other.
I think that's where the sentiment comes from. It's not that Rust is similar to C++ in terms of the actual languages and their features. It's that people who like C++ are more likely to like Rust than people who like C are.
I would argue that C is not a minimalistic language either. There is a lot under the hood in C. But it feels small in a way that Rust and C++ don't.
I think Rust and C++ appeal to programmers who are OK with a large investment in wrapping their head around a big complex language with the expectation that they will be able to amortize that investment by being very productive in large projects over a large period of time. Maybe sometimes the language feels like trying to keep a piece of heavy duty machinery from killing you, but they're willing to wrestle with it for the power you get in return.
The people who are excited about Zig and C want something that feels more like a hand tool that doesn't demand a lot of their attention and lets them focus on writing their code, even if the writing process is a little more manual labor in return.
tialaramex 7 hours ago [-]
> And, yet, if you like one, there's a very good chance you like the other.
Among my friends I know several people who are very enthusiastic about board games and several who are very enthusiastic about craft beer, but there's not a particular noticeable overlap. Personally of course I am very into board games and I don't drink at all.
> I would argue that C is not a minimalistic language either. There is a lot under the hood in C.
Nah, C actually is small, that's why K&R is such a short book. It makes enormous compromises to pull that off, but presumably on a machine where 64kB of RAM is extraordinary these compromises made lots of sense. C23 is quite a bit bigger, for example "bool" is now an actual type (albeit implicitly convertible) but still small by modern standards.
There really isn't that much "under the hood", it's often just the least possible moving parts that could possibly have worked.
a[b] in C++ is a call to a member function a.operator[](b) -- arbitrary user code
a[b] in Rust is a call to core::ops::Index::index(a, b) or, in context IndexMut::index_mut -- again, arbitrary user code
a[b] in C is just a pointer addition of a and b - one of them will be converted to a pointer if necessary, and then the other one is added to the pointer using normal pointer arithmetic rules
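The Rust case above can be shown directly. This is a small sketch with an invented wrapper type (`Logged`): because `a[b]` on a user type dispatches through `std::ops::Index`, arbitrary user code runs on every indexing expression, unlike C's pure pointer arithmetic.

```rust
use std::ops::Index;

// A user type whose `a[b]` runs arbitrary user code via Index::index.
struct Logged(Vec<i32>);

impl Index<usize> for Logged {
    type Output = i32;
    fn index(&self, i: usize) -> &i32 {
        // any user logic could run here before the access
        &self.0[i]
    }
}

fn main() {
    let v = Logged(vec![10, 20, 30]);
    assert_eq!(v[1], 20); // calls Logged's index()
}
```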
pjmlp 26 minutes ago [-]
Except C in 2025 isn't K&R C, rather C23.
Also even between C89 and C23, many folks wrongly count "whatever my compiler does" as C, and there are endless amounts of extensions to be aware of.
hajile 5 hours ago [-]
> Nah, C actually is small,
I'd argue that C is much bigger than K&R, but that isn't immediately visible to a new programmer because it's all undefined behaviors.
pjmlp 28 minutes ago [-]
C23 + compiler extensions versus K&R C, is a little more than only UB.
estebank 8 hours ago [-]
> Maybe sometimes the language feels like trying to keep a piece of heavy duty machinery from killing you, but they're willing to wrestle with it for the power you get in return.
It's funny because to me there's an analogy with heavy machinery but materially different: there are some industrial machines that have two buttons that need to be actuated to activate the mechanism, separated by arm length in order to ensure that the operator's arms are out of the way when the limb crunching bits are moving. I see Rust that way, engineering the safe way to do things as the path of least resistance, at the cost of some convenience when trying to do something "unsafe".
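The two-button interlock has a rough analogue in code. As a sketch (the function name is invented): writing through a raw pointer only compiles inside an explicit `unsafe` block, so the dangerous action requires a deliberate second "actuation" by the operator.

```rust
// Mutating through a raw pointer needs an explicit unsafe block:
// the compiler forces the second button press.
fn write_through_raw(x: &mut i32, v: i32) {
    let p: *mut i32 = x; // &mut i32 coerces to *mut i32
    unsafe {
        *p = v; // the explicit opt-in
    }
}

fn main() {
    let mut x = 1;
    write_through_raw(&mut x, 2);
    assert_eq!(x, 2);
}
```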
pcwalton 7 hours ago [-]
The reason I focus on the memory safety difference is precisely to not minimize its importance. It's far more salient of a difference than whether a language "feels big" or not. Talking about whether Zig requires "a little more manual labor" than Rust is missing the enormous elephant in the room.
platinumrad 6 hours ago [-]
> It's far more salient of a difference than whether a language "feels big" or not.
You clearly don't like it, but it seems many people disagree.
throwawaymaths 6 hours ago [-]
pcwalton is responsible for a lot of the Rust borrow checker, so, not a neutral opinion. I've posted it too many times on this thread, but it seems borrow checking analysis may be possible for Zig (if the Zig team should want to).
pcwalton 5 hours ago [-]
I highly suspect it won't be feasible for the same reason it isn't feasible in C++: you could technically implement it, but tons of existing patterns in the ecosystem would become impossible to express, so in practice it would end up creating a different language. From a skim, the CLR project you linked to claims that metadata will probably be needed in order to enforce aliasable xor mutable, and I agree.
zozbot234 5 hours ago [-]
> tons of existing patterns in the [C/C++] ecosystem would become impossible to express
Well, the really harsh way of putting this is that the patterns break for a reason; they rely on global claims about the program, so they aren't genuinely robust in the context of code that sits within a large, constantly evolving codebase that can't be practically surveyed in its entirety. Rust is very good at picking patterns that can be verified with a comparatively straightforward, "local" analysis that broadly follows the same structure as the actual program syntax. Safety claims that rely on "global" properties which cannot be kept within a self-contained, module-like portion of the code are essentially what the unsafe marker is intended for. And this is exactly what idiomatic C/C++ code often gives you.
This is actually why I think that proposals like Safe C++ should get a lot more attention than they do at present. Yes, Safe C++ changes what's idiomatic in the language, but it does so in a way that's broadly sensible (given our increased attention to memory safety), especially in a context of "programming in the large".
throwawaymaths 4 hours ago [-]
you can go a long way before getting to aliasable xor mutable, and the metadata doesn't require a language change; there's an example in there of how to bind metadata with no language changes.
voidhorse 3 hours ago [-]
Exactly. There are two kinds of programmers, those who enjoy spending a bunch of time thinking about problems and decisions the language has foisted upon you (subtyping hierarchies, lifetime annotations, visibility, design patterns) and there are those that like spending that time thinking about the actual problem they want to solve instead.
I jest, but only a tiny bit. The features of heavy OOP and feature-rich languages tend to show their value only in really large codebases being worked on by several different people, precisely because many of their features are just guardrails to make it hard for people to code incorrectly against another's understanding or assumptions, when shared understanding is more difficult to establish. Conversely, any solo programmer or really small team is almost invariably better served by a language like Go, C, Scheme, or Zig.
pcwalton 26 minutes ago [-]
The idea that memory safety is only, or primarily, a problem in large codebases written by a sizable team is an interesting theory (sort of an inverse Linus' Law?) Unfortunately, it's contradicted by decades of experience.
tialaramex 8 hours ago [-]
Today all of the code I'm not paid to write is in Rust. I spent many happy years previously getting paid to write C (I have also been paid to write Java, PHP, Go and C#, and I have written probably a dozen more languages for one reason or another over the years but never as specifically a thing people were paying me to do)
I always thought C++ was a terrible idea, from way before C++ 98, I own Stroustrup's terrible book about his language, which I picked up at the same time as the revised K&R and it did nothing to change that belief, nor have subsequent standards.
However, I do have some sympathy for this sentiment about Rust being a better C++. Even though Rust and C++ have an entirely different approach to many important problems the syntax often looks similar and I think Zig manages to be less intimidating than Rust for that reason if you don't want that complexity.
Personally I had no interest in C++† and I have no serious interest in Zig.
† Ironically I ended up caring a lot more about C++ after I learned Rust, and most specifically when understanding how Rust's HashMap type works, but I didn't end up liking C++ I just ended up much better informed about it.
gens 7 hours ago [-]
It is not about memory safety or anything like that. It is about simplicity.
If you say "you can't do x with y in C++" you will get a "yes you can, you just use asd::dsadasd::asdadqwreqsdwerig_hfdoigbhiohrf() with the weaorgoiawr flag". From what I have seen of Rust, it is similar.
I don't want to fill my brain with vim bindings.. cough.. Rust ways of doing something. I just want to code my hobby game engine v7.
That said, I am happy to use software written in it. Even though the evangelists can be really annoying.
zamalek 9 hours ago [-]
> † Reference counting is a form of garbage collection.
I agree with Raymond Chen's take on the academic definition of GCs[1], and therefore Rust is certainly a GC'd language (because your code behaves as though memory is infinite... usually). It's probably one of the first examples of "static garbage collection" - though I'm sure someone will point out a prior example.
This feels like deliberate misdirection, because in most practical cases what people mean is "some memory management process that will slow my program down non-trivially". By this definition Rust does not have GC, whereas Go, e.g., does.
The "simulates infinite RAM" is an interesting perspective but simply not the subject of most conversations.
zozbot234 8 hours ago [-]
Manual heap allocation can slow the program down non-trivially compared to using an arena which is cleaned up all at once; hence, manual heap allocation is a kind of GC. Checkmate atheists.
caspper69 4 hours ago [-]
I went down a path researching the viability of region based memory management (a form of arenas).
A language based on such a paradigm can be provably memory safe, and regions can have their own allocators and optionally provide locking when the regions are shared.
This approach obviates the need for reference counting individual allocations (since regions are tracked as a whole), but it suffers from excess memory usage in the event of many short-lived allocations (i.e. they leak until the entire region's allocations go out of scope). But those allocation patterns can be problematic in every systems language, as they can eventually cause memory fragmentation.
That problem can be minimized using per-allocation reference counting, but that can incur a heavy performance hit. Although not having to use it everywhere could minimize the impact.
The plus side is you don't have to worry about borrow checking, so such a language can be more flexible than Rust, while still maintaining the memory safety aspect.
The question, as always, is: is the juice worth the squeeze?
Truthfully, I suspect no. The Rust train has left the station and has a decade head start. Even if it is a pain in the ass, lol.
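The region idea sketched above can be shown in miniature. This is a toy, not the researched design: `Region`, `alloc`, and the handle scheme are invented names. Objects are handed out from a region and all freed together when the region is dropped, with no per-allocation refcount.

```rust
// Toy region/arena: allocations live exactly as long as the
// region and are freed en masse when it goes out of scope.
struct Region<T> {
    items: Vec<T>,
}

impl<T> Region<T> {
    fn new() -> Self {
        Region { items: Vec::new() }
    }
    // returns a handle valid for the life of the region
    fn alloc(&mut self, v: T) -> usize {
        self.items.push(v);
        self.items.len() - 1
    }
    fn get(&self, h: usize) -> &T {
        &self.items[h]
    }
}

fn region_demo() -> i32 {
    let mut r = Region::new();
    let a = r.alloc(40);
    let b = r.alloc(2);
    r.get(a) + r.get(b)
} // the entire region is freed here in one shot

fn main() {
    assert_eq!(region_demo(), 42);
}
```

Note the trade-off the comment describes: nothing inside the region is reclaimed individually, which is exactly the short-lived-allocation leak until the region dies.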
zozbot234 4 hours ago [-]
You may be interested in Rust language proposals for local memory allocators and "storages"; they may be enough for something very much like this. The lifetime concept in Rust is quite close already to that of a "region", i.e. an area of memory that the lifetime can pertain to.
caspper69 4 hours ago [-]
Depending on the semantics of the implementation, something like that would go a long way toward eliminating one of my biggest issues with Rust. For a low-level systems language, it is imperative to offer 2 things, which are currently a pain in Rust: (1) you must be able to "materialize" a struct at an arbitrary location; why? Because hardware tables exist at a specified location and are provided by hardware: they are not created or instantiated in the host language; and (2) you must be able to reference structs from other structs, which immediately triggers lifetime annotations, which begin to color everything they touch, much like async does to functions.
And I admit, I loathe the borrow checker. Ironically, I never really have problems with it, because I do understand it, it's just that I find it too limiting. Not everything I want to do is unsafe, and I hate the way it has made people think that if you really do know better than the borrow checker, you must clearly be doing something wrong and you should re-architect your code. It's insulting.
nemetroid 8 hours ago [-]
By that definition, a C++ program with heavy usage of std::shared_ptr has GC.
cmrdporcupine 7 hours ago [-]
And it does. Reference counting is garbage collection. And std::shared_ptr, or Rust's Rc/Arc are basically lightweight GC runtimes inside your program.
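The "lightweight GC runtime" is observable from safe code: `Rc` increments a count on clone, decrements on drop, and frees the value when it hits zero. The `rc_counts` wrapper is just a name for this demo.

```rust
use std::rc::Rc;

// Watch the refcount bookkeeping that Rc does on our behalf.
fn rc_counts() -> (usize, usize) {
    let a = Rc::new(String::from("shared"));
    let b = Rc::clone(&a); // count bumped to 2
    let during = Rc::strong_count(&a);
    drop(b); // count back to 1
    let after = Rc::strong_count(&a);
    (during, after)
}

fn main() {
    assert_eq!(rc_counts(), (2, 1));
}
```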
coldtea 7 hours ago [-]
Isn't that the point?
defen 7 hours ago [-]
> There is no corresponding popular replacement for C that's more minimalist than Rust and memory safe.
In the real world, memory safety is not all-or-nothing (unless you're willing to concede that Rust is not safe either, since unsafe Rust exists). I'm working on an embedded project in Rust and I'd MUCH rather be using Zig. The only safety thing that Rust would give me that Zig does not is protection from returning pointers to stack-allocated objects (there are no dynamic allocations and no concurrency outside of extremely simple ISRs that push events onto a statically allocated queue). But in exchange I have to deal with the presence of unsafe Rust, which feels like a gigantic minefield even compared to C.
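That one safety property can be sketched in a few lines (function names invented for illustration): Rust rejects a function that hands back a reference into its own stack frame, so you return an owned value or take storage from the caller instead.

```rust
// The rejected pattern, kept as a comment because it does not compile:
//     fn bad() -> &i32 { let x = 5; &x } // error: dangling reference
// The safe alternative moves the value out instead:
fn good() -> i32 {
    let x = 5;
    x // returned by value, nothing dangles
}

fn main() {
    assert_eq!(good(), 5);
}
```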
throwawaymaths 6 hours ago [-]
Protection from returning stack pointers seems achievable with static analysis of Zig AIR.
> But in exchange I have to deal with the presence of unsafe Rust, which feels like a gigantic minefield even compared to C.
I think idiomatic coding norms for unsafe Rust are still a bit half-baked, and this is where something like Zig can have an advantage of sorts. You can see this also, e.g. in the ongoing proposals for a Pin<> alternative.
coldtea 7 hours ago [-]
The similarity between C++ and Rust is that both bust your balls with complexity for programming at large. And the inverse goes for C and Zig.
Those are the axes relevant to the parent in the context of their comment - not specific language semantics or core features.
vitiral 4 hours ago [-]
From what I understand that language is Zig, no?
Also, there's FORTH!
wtetzner 4 hours ago [-]
I don't think Zig is memory safe?
vitiral 3 hours ago [-]
I was replying to this bit
> difference between C++ and C is that the former is a large multi-paradigm language, while the latter is a minimalist language. These are completely different axes.
> There is no corresponding popular replacement for C that's more minimalist than Rust and memory safe.
Edit: oh, I never read the last bit "and memory safe" -- well ya, that's kind of rust's major advantage.
hansvm 9 hours ago [-]
It's not a perfect analogy, but if you want to put yourself in the shoes of the people making it:
1. Rust is an immensely complicated language, and it's not very composable (see the async debacle and whatnot). On the simple<->complex slider, it's smack dab on the right of the scale.
2. Ignoring any nitpicking [0], Zig is memory-safe enough in practice, placing it much closer to Rust than to C/C++ on the memory safety axis. My teammates have been using Zig for nearly a year, and the only memory safety bug was (a) caught before prod and (b) not something Rust's features would have prevented [1]. The `defer` and `errdefer` statements are excellent, and much like how you closely audit the use of `unsafe` in Rust there is only a small subset of Zig where you actually need to pull your magnifying glass out to figure out if the code has any major issues. In terms of memory issues I've cared about (not all conforming to Rust's narrow definition of memory safety), I've personally seen many more problems in Rust projects I contribute toward (only the one in Zig, plus a misunderstanding of async as I was learning the language a few years ago, many of varying severity in Rust, at this point probably more code written in Zig than Rust, 10yoe before starting with either).
With that in mind, you have C/C++ on the unsafe axis and Zig/Rust on the safe axis. The complexity axis is self-explanatory, fleshing out the analogy.
Is Zig memory-safe? No, absolutely not. Does that mean that Rust will win out for some domains? Absolutely. In practical terms though, your average senior developer will have many memory safety bugs in C/C++ and few in Zig/Rust. It's a reasonable way to compare and contrast languages.
Is it a perfect description? No, the map is not the territory. It's an analogy that helps a lot of people understand the world around them though.
[0] Even Python is simpler than Rust, and it's memory-safe. If we're limiting ourselves to systems languages, you still have a number of options like Ada and Coq. Rust is popular because it offers a certain tradeoff in the safety/performance/devex Pareto curve, and because it's had a lot of marketing. It's unique in that niche, by definition, but it's far from the only language to offer the features you explicitly stated.
[1] It was just an object pool, and the (aggregate) resetting logic wasn't solid. The objects would have passed through the borrow checker with flying colors though.
Edit: To your GC point, many parts of Rust look closer to GC than not under the hood. You don't have a GC pause, but you have object pools (sometimes falling back to kernel object pools) and a variety of allocation data structures. If RC is a GC tactic, the extra pointer increment/decrement is negligible compared to what Rust actually does to handle its objects (RC is everything Rust does, plus a counter). That's one of my primary performance complaints with the language, that interacting with a churn of small objects is both expensive and the easiest way to code. I can't trust code I see in the wild to behave reasonably by default.
pcwalton 8 hours ago [-]
I've been seeing C++ fans talk about how they never see memory safety issues in practice in C++ for a decade and a half. I even believed it sometimes. But if there's ever been a common story over those 15 years, it's that these anecdotes mean little, and the actual memory safety property means a lot.
The only time I've seen "almost memory safe" actually work is in Go and Swift, which have memory safety problems with concurrency, but they're actually rare enough not to matter too much (though in Go's case it would have been easy to design interfaces and slices not to have the problem, and I wish they had). I simply don't believe that Zig is meaningfully more memory safe than C++.
Is your opinion based on anything other than pure speculation?
I think it makes more sense to form an opinion after actually having tried Rust, C++, and Zig in earnest.
There are lots of us out there who've done it. Join us!
fweimer 8 hours ago [-]
I think it's possible to retrofit race-safe slices and interfaces into Go, and I expect it to happen one day once some actually relevant code execution exploit shows up.
There's going to be some impact and low-level libraries that manipulate directly the words that constitute slices and interfaces, and there will some slight performance impact and increase in memory usage, but hopefully nothing drastic.
zozbot234 8 hours ago [-]
I think slices and interfaces in Go are both 2×usize width, in which case you just need double-width CAS to make them safe from data races - which most modern architectures support.
fweimer 8 hours ago [-]
Slices are base pointer, length, and capacity, so three words. But there's a garbage collector, so you can install a thunk if the slice is updated in a way that torn reads cause problems. It makes reads slightly slower, of course.
Two-word loads could be used for interface values (assuming that alignment is increased), except if support for older x86-64 is needed. There are implementations that require using CMPXCHG16B, which is a really slow way to load two words. Both Intel and AMD updated their ISA manuals to say that using VMOVDQA etc. is fine (once the CPU supports AVX and the memory is cacheable).
FpUser 7 hours ago [-]
>"I've been seeing C++ fans talk about how they never see memory safety issues in practice in C++ for a decade and a half. I even believed it sometimes. But if there's ever been a common story over those 15 years, it's that these anecdotes mean little, and the actual memory safety property means a lot."
While I use C++ a lot, I am not a fan. It is just one of many languages I use. But from my personal experience it is true. I frankly forget the last time I hit a memory problem; it has been years for sure. And my code is often stateful multithreaded backends with high request rates.
zozbot234 8 hours ago [-]
> see the async debacle and whatnot
The async featureset in Rust is far from complete, but async is also somewhat of a niche. You're not necessarily expected to use it, often you can just use threads.
> Zig is memory-safe enough in practice
Temporal safety is a huge deal, especially for the "programming in the large" case. Where unstated assumptions that are relied upon for the safety of some piece of code can be broken as some other part of the code evolves. The Rust borrow checker is great for surfacing these issues, and has very few practical alternatives.
> If we're limiting ourselves to systems languages, you still have a number of options like Ada and Coq.
It's easy to be "safe" if the equivalent of free() is marked as part of the unsafe subset, as with Ada. Coq is not very relevant on its own, though I suppose it could be part of a solution for proving the memory safety of C code. But this is known to be quite hard unless you take care to write the program in the "idiomatically safe" style that a language like Rust points you to.
emporas 58 minutes ago [-]
> 1. Rust is an immensely complicated language, and it's not very composable.
It is a procedural language and, as such, composition of functions is not implemented easily, if it is possible at all. It is a shame for sure that such powerful techniques are not possible in Rust, but for some people it is worth the trade-off.
zozbot234 9 minutes ago [-]
It is implemented. But it requires quite a bit of boilerplate if you want to compose arbitrary closures at runtime (as you would in most FP languages), because that involves non-trivial overhead and Rust surfaces it in the code (see "dyn Fn trait objects" for how that works in detail).
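The boilerplate being described looks roughly like this. A hedged sketch, shown monomorphically over i32 to keep it short; `compose` is an invented helper, and the boxing is the "dyn Fn trait objects" overhead mentioned above.

```rust
// Runtime composition of two closures via boxed trait objects.
fn compose(
    f: Box<dyn Fn(i32) -> i32>,
    g: Box<dyn Fn(i32) -> i32>,
) -> Box<dyn Fn(i32) -> i32> {
    Box::new(move |x| g(f(x))) // g after f, like FP's (g . f)
}

fn main() {
    let add1 = Box::new(|x: i32| x + 1);
    let double = Box::new(|x: i32| x * 2);
    let h = compose(add1, double);
    assert_eq!(h(3), 8); // double(add1(3))
}
```

Each `Box<dyn Fn>` is a heap allocation plus dynamic dispatch, which is the non-trivial overhead Rust surfaces where most FP languages hide it.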
throwawaymaths 7 hours ago [-]
even so it seems like it would eventually be possible to do more safety analysis for zig, if not in the compiler itself:
> On the simple<->complex slider, it's smack dab on the right of the scale.
Sure, but now you snuck an Artifact of Death (in the TvTropes sense) into your codebase. It won't kill your code base immediately, and with proper handling it might work, but all it takes is one mistake, one oversight, for it to cause issues.
hansvm 7 hours ago [-]
I agreed with you either way, but I'm currently at a loss as to whether this is a for/against take on rust.
Ygg2 6 hours ago [-]
You traded ease of writing for long-term ease of debugging; I don't think we agree.
sesm 6 hours ago [-]
> There is no corresponding popular replacement for C that's more minimalist than Rust and memory safe.
There is Objective-C, it fits your definition of memory safety.
pcwalton 1 hours ago [-]
Objective-C and Swift are the reason why I added the footnote. (Also Objective-C is very, very much not memory safe.)
anacrolix 9 hours ago [-]
This is actually accurate. While Rust is a great replacement for C, so is C++. But Rust has tons of extra features that C doesn't have, just like C++. Moving from C++ to Rust is a sideways move. Moving from C to Rust is definitely an increase in complexity and abstraction.
Rust is a great replacement for C++ as it fits into the same place in the stack of tools.
Go is not a C or Python replacement.
Zig is a good replacement for C.
pjmlp 21 minutes ago [-]
Go has plenty of bad design decisions, but not being a safer C isn't one of them.
TinyGo and TamaGo folks enjoy writing their bare metal applications on embedded and firmware.
tialaramex 6 hours ago [-]
> Go is not a C or Python replacement.
Google stopped writing new Python, because they found that the same engineers would produce software with similar defect rates, at a similar price and in similar time, with Go, but it would have markedly better performance, so, no more Python.
Go manages to have a nice sharp start, which means there are going to be a bunch of cases where you didn't need all the perf from C, but you did need a prompt start, so, Java was not an option but Go is fine.
In my own practice, Rust entirely replaces C, which has previously been my preferred language for many use cases.
zozbot234 9 hours ago [-]
> But Rust has tons of extra features that C doesn't have, just like C++.
C++ has tons of extra features over C because it's a kitchen sink language. Rust has some extra features over C because in order to support static analysis for memory safety in a practically usable manner, you need those features. In practice, there are lots of C++ features that Rust doesn't bother with, especially around templates. (Rust just has a more general macro feature instead. The C++ folks are now on track to adding a clunky "metaclass" feature which is a lot like Rust macros.)
pjmlp 12 minutes ago [-]
Every language becomes a kitchen sink with enough time on the market, even C; people really should learn about the standard and the myriad of compiler extensions. K&R C was almost 50 years ago.
C++ already gets close to Rust macros with a mix of compile-time execution, concepts, and type traits, without requiring another mini-language or an external crate (syn).
xedrac 5 hours ago [-]
> Moving from C++ to Rust is a sideways move.
In what sense? Features or complexity? From a productivity/correctness perspective, Rust is a huge step up over C++.
pjmlp 8 minutes ago [-]
While I agree, that isn't so much the case for those using static analysis and hardened runtimes, which provide many of the same safety improvements, granted without lifetime tracking.
Use more Xcode, CLion, Visual Studio, and C++ Builder, and less vi and Emacs, for C and C++.
Not everyone has the RIIR luxury for what we do.
fc417fc802 10 hours ago [-]
Somewhat related, even when I work with C++ I use it as "C with RAII". What I actually want is a scheme (R5RS) with manual memory management and a borrow checker. I don't know how well such a monstrosity would actually work in practice, but it's what I've convinced myself that I want.
I read a blog that called Rust a language that was aimed to be high-level but without GC.
superlopuh 9 hours ago [-]
My experience with Zig is that it's also a plausible replacement for C++
fuzztester 8 hours ago [-]
explanation needed, bro.
nerdy langnoob or noobie langnerd here. not sure which is which, cuz my parsing skills are nearly zilch. ;)
chongli 6 hours ago [-]
Zig’s comptime feature gives a lot of the power of C++ templates. This makes it easier to implement a lot of libraries which are just a pain to do in plain C. Even seemingly simple things like a generic vector are annoying to do in C, unless you abandon type safety and just start passing pointers to void.
I believe Zig even has some libraries for doing more exotic things easily, such as converting between an array of structs and a struct of arrays (and back).
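For readers who haven't fought this in C, the void-pointer approach looks roughly like this: a sketch with invented names (not from any particular library) of a "generic" vector that stores opaque bytes, so the compiler can no longer check element types.

```c
#include <stdlib.h>
#include <string.h>

/* A "generic" vector in plain C: it stores opaque bytes, so the
   compiler has no idea what element type is actually inside. */
typedef struct {
    void *data;
    size_t len, cap, elem_size;
} Vec;

static void vec_init(Vec *v, size_t elem_size) {
    v->data = NULL;
    v->len = v->cap = 0;
    v->elem_size = elem_size;
}

static void vec_push(Vec *v, const void *elem) {
    if (v->len == v->cap) {
        v->cap = v->cap ? v->cap * 2 : 4;
        v->data = realloc(v->data, v->cap * v->elem_size); /* error handling omitted for brevity */
    }
    memcpy((char *)v->data + v->len * v->elem_size, elem, v->elem_size);
    v->len++;
}

/* Caller must cast the result back to the right type; get it wrong
   and the compiler won't say a word. */
static void *vec_get(const Vec *v, size_t i) {
    return (char *)v->data + i * v->elem_size;
}
```

Reading an element back requires a cast like `*(int *)vec_get(&v, 0)`, and nothing stops you from casting to `double *` instead; that's the type-safety hole that comptime generics close.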
fuzztester 13 minutes ago [-]
>Zig’s comptime feature gives a lot of the power of C++ templates. This makes it easier to implement a lot of libraries which are just a pain to do in plain C. Even seemingly simple things like a generic vector are annoying to do in C, unless you abandon type safety and just start passing pointers to void.
I get it, thanks. in C you have to cast everything to (void *) to do things like a generic vector.
>I believe Zig even has some libraries for doing more exotic things easily, such as converting between an array of structs and a struct of arrays (and back).
yes, iirc, there was a zig thread about this on hn recently.
pjmlp 8 minutes ago [-]
Depends on which C++ version you're talking about.
infamouscow 7 hours ago [-]
C programmers have long suffered the cumbersome ritual of struct manipulation—resorting to C++ merely to dress up structs as classes.
Zig shatters that with comptime and the 'type' type.
FpUser 7 hours ago [-]
>"but Rust is actually a replacement for C++. "
I would disagree. C++ provides way more features than Rust and to me Rust feels way more constrained comparatively.
Many of my favourite Rust features aren't in C++. For example they don't have real Sum types, they don't have the correct move semantic, they are statement oriented rather than expression oriented and their language lacks decent built-in tooling.
But also, some of the things C++ does have are just bad and won't get fixed/ removed because they're popular. And there's no sign of it slowing down.
I think that in practice you should use Rust in most places that C++ is used today, while some of the remainder should be WUFFS, or something entirely different.
pjmlp 5 minutes ago [-]
The biggest issue for me, is that Rust isn't used where I care about, mainstream language runtimes, compiler toolchains, and GPU ecosystem standards.
No one from them would accept a patch in Rust instead of C or C++.
Yes I know about Deno, but even them haven't rewriten V8, and wgpu or Rust CUDA aren't the same as what Khronos, NVidia, AMD, Microsoft put out in C++.
FpUser 5 hours ago [-]
>"I think that in practice you should use Rust"
I am doing fine. Thank you. Also I am not a language warrior. I do have my preferences but use many since I am an independent and work with many clients.
christophilus 9 hours ago [-]
I've played with Hare, Zig, and Odin. Odin is my favorite. It's a fair bit faster to compile (similar to Hare), and has the nicest syntax (subjectively speaking). I wish it would get more traction. Looking forward to trying Jai if it ever makes it to GA.
wolfspaw 6 hours ago [-]
Odin is the best (followed by Zig)
Odin has the best approach for "standard library" by blessing/vendoring immensely useful libraries
Odin also has the best approach for Vector Math with native Vector and Matrix types
tialaramex 6 hours ago [-]
The swizzling is pretty cool, rather special purpose but it makes sense that Ginger Bill wanted this and it's Ginger Bill's language.
Odin's "standard library" stuff is very silly; it still feels like someone just copy-pasted the lead developer's "useful stuff" directory. Bill doesn't feel there's value in actual standard libraries, so instead here's... whatever. He insists that's not what's going on here, but that doesn't change how it feels.
smartmic 9 hours ago [-]
From what you describe, you might also like Odin.
zoogeny 9 hours ago [-]
Interestingly enough, it is one of the only "talked about" languages I have almost no experience with. Even Roc I've watched a few YouTube videos on. I've only really seen Odin mentioned on X, not even a HN post.
I suppose there is also Jai in a similar space as well, although I'm not a devotee to Jonathan Blow and I don't share much of the excitement his followers seem to have.
I do feel Zig has the current trend moving in its favor, with projects like Ghostty and Bun gaining prominence. I think Odin would need something like that to really capture attention.
dismalaf 9 hours ago [-]
Odin has commercial applications, basically all the JangaFX apps... But yeah, it's missing a killer open source app. Positive is it has a decent amount of batteries included.
LAC-Tech 9 hours ago [-]
I don't see the "low-level language wars" as being between Zig and Rust. I see it as being between old and new. And I'm on team new.
greener_grass 11 hours ago [-]
I can't help but feel like the Roc team has an attitude of "imperative programming for me, but not for thee".
And now they are doubling down on that by moving from "OCaml meets C++" to "C, the good parts"!
If FP isn't good for writing a compiler, what is it good for?
isaacvando 9 hours ago [-]
Roc couldn't be optimized for writing the Roc compiler without sacrificing some of its own goals. For example, Roc is completely memory-safe, but the compiler needs to do memory-unsafe things. Introducing memory-unsafety into Roc would just make it worse. Roc has excellent performance, but it will never be as fast as a systems language that allows you to do manual memory management. This is by design and is what you want for the vast majority of applications.
There are a number of new imperative features that have been (or will be) added to the language that capture a lot of the convenience of imperative languages without losing functional guarantees. Richard gave a talk about it here: https://youtu.be/42TUAKhzlRI?feature=shared.
int_19h 53 minutes ago [-]
It still feels kinda weird. Parsers, compilers etc are traditionally considered one of the "natural" applications for functional programming languages.
Muromec 7 hours ago [-]
>but the compiler needs to do memory-unsafe things
"The split of Rust for the compiler and Zig for the standard library has worked well so far, and there are no plans to change it."
I assume that statement will need updating.
WD-42 8 hours ago [-]
At this point it feels like they are just playing with languages.
munificent 7 hours ago [-]
I mean, the language doesn't even have a version number yet. We should expect them to still be exploring the design space.
chubot 9 hours ago [-]
That's an interesting point, and something I thought of when reading the parser combinator vs. recursive descent point
Around 2014, I did some experiments with OCaml, and liked it very much
Then I went to do lexing and parsing in OCaml, and my experience was that Python/C++ are actually better for that.
Lexing and parsing are inherently stateful; it's natural to express those algorithms imperatively. I never found parser combinators compelling, and I don't think there are many big / "real" language implementations that use them, if any. They are probably OK for small languages and DSLs
I use regular expressions as much as possible, so it's more declarative/functional. But you still need imperative logic around them IME [1], even in the lexer, and also in the parser.
---
So yeah I think that functional languages ARE good for writing or at least prototyping compilers -- there are a lots of examples I've seen, and sometimes I'm jealous of the expressiveness
But as far as writing lexers and parsers, they don't seem like an improvement, and are probably a little worse
So why is it a non-goal of Roc to be implemented with this approach?
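To illustrate the statefulness point: a minimal hand-written lexer is essentially one cursor plus a loop of mutations. A toy sketch in C (all names invented, not from any real compiler):

```c
#include <ctype.h>
#include <stddef.h>

typedef enum { TOK_NUMBER, TOK_IDENT, TOK_PLUS, TOK_ERROR, TOK_EOF } TokKind;

typedef struct {
    TokKind kind;
    const char *start;
    size_t len;
} Token;

/* The lexer's entire state is one advancing cursor; every path below
   mutates it. This is the "inherently stateful" part. */
typedef struct { const char *cur; } Lexer;

static Token next_token(Lexer *lx) {
    while (isspace((unsigned char)*lx->cur)) lx->cur++;  /* skip whitespace */
    const char *start = lx->cur;

    if (*lx->cur == '\0') return (Token){TOK_EOF, start, 0};
    if (*lx->cur == '+') { lx->cur++; return (Token){TOK_PLUS, start, 1}; }
    if (isdigit((unsigned char)*lx->cur)) {
        while (isdigit((unsigned char)*lx->cur)) lx->cur++;
        return (Token){TOK_NUMBER, start, (size_t)(lx->cur - start)};
    }
    if (isalpha((unsigned char)*lx->cur) || *lx->cur == '_') {
        while (isalnum((unsigned char)*lx->cur) || *lx->cur == '_') lx->cur++;
        return (Token){TOK_IDENT, start, (size_t)(lx->cur - start)};
    }
    lx->cur++;  /* unknown byte: consume it so we always make progress */
    return (Token){TOK_ERROR, start, 1};
}
```

Expressing the same thing purely functionally means threading the cursor through every return value, which is exactly the plumbing that makes the imperative version feel more natural here.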
coffeeaddict1 9 hours ago [-]
Have you not read the post? Because compile times. Rust has awful compile times. If it didn't, I'm sure the Roc team would have stayed with Rust.
munchler 10 hours ago [-]
As an FP fan, I agree. This is disappointing.
gregwebs 10 hours ago [-]
Go was built while waiting for C++ to compile- fast compilation was an implicit design goal.
Rust on the other hand didn’t prioritize compile times and ended up making design decisions that make faster compilation difficult to achieve. To me it’s the biggest pain point with Rust for a large code base and that seems to be the sentiment here as well.
pcwalton 10 hours ago [-]
Rust always cared about compile times. Much of the motivation to switch from rustboot to rustc was compile times.
(Source: I wrote much of the Rust compiler.)
kettlecorn 4 hours ago [-]
The Rust ecosystem has long downplayed the importance of compile times.
Many foundational crates, serde for example, contribute much more to compile times than they need to.
I spent a long time reinventing many foundational Rust crates for my game engine, and I proved it's possible to attain similar features in a fraction of the compile time, but it's a losing battle to forgo most of the ecosystem.
bhansconnect 8 hours ago [-]
Thank you for your work! Rust is still a great language.
I think a significant portion of our pain with Rust compile times is self-inflicted due to the natural growth of our crate organization and stages.
I still think the rewrite in Zig is the right choice for us for various reasons, but at least a chunk of our compile-time issues are our own doing (though this happens to any software project that grows organically to 300k LOC).
eximius 9 hours ago [-]
But were any _language_ decisions discarded due to compile time concerns? I don't think anyone would claim the folks working on the rust compiler don't care.
On that note, thank you for your part! I sure enjoy your work! :)
LegionMammal978 3 hours ago [-]
Since at least 2018, there's been a planned upgrade to the borrow checker to permit certain patterns that it currently rejects [0]. Also since 2018, there's been an implementation of the new version (called Polonius) that can be enabled with an unstable compiler flag [1].
But now it's 2025, and the new version still hasn't been enabled on stable Rust, since the Polonius implementation is seen as far too slow for larger programs. (I'm not exactly sure how horrible it really is, I haven't looked at the numbers.) A big goal of the types team since last year has been to reimplement a faster version within rustc [2].
I'd count this as a language feature (albeit a relatively minor one) that's been greatly deferred in favor of shorter compile times.
Yet, the whole crate model giving up on ABI stability as a goal hurts a lot, both for performance and for sanity.
Why aren't people designing modern languages to make it easier to keep a stable ABI, rather than giving up entirely?
pcwalton 9 hours ago [-]
ABI stability is one of those things that seems obvious ("sane"), until you try to implement ABI-stable unboxed generics and discover that the amount of complexity you incur, as well as the performance tax, is absurd. The code bloat with the resulting "intensional type analysis" actually makes the compile time worse (and the runtime far worse) than just specializing generics in many cases.
throwaway17_17 3 hours ago [-]
In this case (i.e. Rust specifically), what do unboxed generics mean? Without actually knowing Rust, I don't think I can analogize to either type theory or another language. I assume that if I can figure out what they are, I can infer why they are difficult to compile.
zozbot234 19 minutes ago [-]
It means ABI-stable interfaces (i.e. interfaces between separately-compiled "crates"/libraries, including dylibs and shared objects) can involve arbitrarily complex types with nested generics, and these are implemented behind a single pointer dereference at most. This requires something a lot like dyn trait object vtables in Rust ("witness tables") except quite a bit more complex.
zozbot234 9 hours ago [-]
The Swift folks seem to manage just fine.
pcwalton 8 hours ago [-]
The amount of complexity that Swift has to deal with in order to make ABI stability work is exactly what I'm talking about. It's astronomical. Furthermore, it's only partial: there are some generics in which they couldn't afford the price of ABI stability, so they're marked inline and aren't ABI stable.
From listening to Feldman's podcast, this doesn't really come as a surprise to me. The rigor that Rust demands seems not to jibe with his 'worse is better' approach. That coupled with the fact they already switched the stdlib from Rust to Zig. The real question I have is why he chose Rust in the first place.
bhansconnect 8 hours ago [-]
Zig was not ready or nearly as popular back in 2019 when the compiler was started.
Not to mention, Richard has a background mostly doing higher level programming. So jumping all the way to something like C or Zig would have been a very big step.
Sometimes you need a stepping stone to learn and figure out what you really want.
christophilus 8 hours ago [-]
It’s covered in TFA, but the tldr is they started when Zig was immature.
3836293648 3 hours ago [-]
Maybe I need to give it another shot, but Zig was a terribly unpleasant language last I tried to use it.
Their motivation seems to justify the decision, but I'm pretty sure this will scare away potential contributors. I've been following Roc personally, and probably will continue to do so, but this definitely has killed a lot of interest for me.
chambers 10 hours ago [-]
This decision looks well-reasoned. It acknowledges the strengths of their current implementation and explains why another language may serve their goals even better. I can follow along as someone interested in general software decision-making, even though I'm not particularly invested in either language.
One specific thing I like is that the motivation for this rewrite is organic, i.e., not driven by an external pressure campaign. It's refreshing to see a drama-free rewrite.
debugnik 10 hours ago [-]
> we also want to convert [the parser] to use recursive descent; it was originally written using parser combinators
Almost all parser combinators are recursive descent with backtracking, they just add higher-order plumbing.
I have a feeling that whatever issue they've encountered with the combinator approach could have been worked around by handwriting only a few pieces.
bhansconnect 8 hours ago [-]
This is more about simplicity, maintainability, and the possibility for new contributors to easily jump in and fix things. Our current parser is not fun for new contributors to learn. Also, I think parser combinators obfuscate a lot of the tracking required for robust error messages.
Personally, I find parser combinators nice for small things but painful for large and robust things.
wirrbel 10 hours ago [-]
When I used parser combinators in Rust years ago, the compile times were really long. I also think it's a strange reason to move away from Rust as a language.
I remember reading the HN thread when Fish rewrote to Rust. A big reason because “they wanted to stay current” so they rewrote from C++ to Rust.
Someone made a joke comment how in a few years there would be tons of HN threads about rewriting Rust to Zig.
wiz21c 9 hours ago [-]
I use rust a lot but, Roc is right: compilation times are not proportional to the safety you gain. At least in my project.
Gibbon1 9 hours ago [-]
I had an issue where the corporate spyware thought my cross compiler was potential malware and ran scans on every file access. My compile times went from a few seconds to 2 minutes. And my codebase has a dozen targets. Where building, fixing, running tests, and checking in used to take half an hour, it now took the whole afternoon.
I think I'd rather stick with C and run static analysis tools on my code when generating release candidates than have to deal with that.
Lyngbakr 11 hours ago [-]
The rationale given makes perfect sense, but isn't it risky to rewrite in a language that is yet to hit 1.0?
bhansconnect 8 hours ago [-]
For sure, but we have a really good relationship with the zig folks and they are willing to help us out.
On top of that, zig has gotten a lot more robust and stable over the last few releases. Upgrading is getting smoother.
But yeah, it is a risk.
stevage 9 hours ago [-]
Much better before than after!
Lyngbakr 9 hours ago [-]
Why? There's the possibility that Zig will introduce breaking changes and the Roc compiler will have to be revised in light of those changes.
brokencode 9 hours ago [-]
I’m assuming the previous commenter thought you were referring to Roc being pre v1 and not Zig.
There are probably some risks to it. And I think that you wouldn’t want to release a Roc v1 before Zig v1 as well.
But if things are working well now, you could always stay on an older version of Zig while you rewrite things for breaking changes in a newer version.
Still potentially a pain, but Rust is post v1 and they ended up deciding to rewrite anyway, so there are no guarantees in any approach.
bmacho 11 hours ago [-]
Strangely, they don't mention that (allegedly? according to them, at least) they already use Zig for the standard library[0], which also started out written in Rust and then got rewritten in Zig:
> (and to unblock updating to the latest Zig version, which we've used for our stdlib for years)
It's somewhere in the middle of the list of the "Why now?" section.
thegeekpirate 11 hours ago [-]
They do mention it but there's only 10k lines of Zig, so it's hardly a blip in the grand scheme of things.
pjmlp 11 hours ago [-]
The way compile times are stressed all over the place is a very good point.
Papercuts like this do make or break adoption.
Even C++ is improving on that front.
pcwalton 10 hours ago [-]
As is Rust. And the C++ compiler performance improvements in LLVM apply to Rust.
pjmlp 50 minutes ago [-]
I was talking about C++ 20 modules and C++23 modularised standard library, and that has little to do with LLVM, rather clang.
LLVM improvements will do little to improve Rust, because the major issue is the lack of parallelism and the massive amount of LLVM IR that the frontend hands to the backend.
pcwalton 35 minutes ago [-]
> LLVM improvements will do little to improve Rust
Falsified many times during Rust's development.
nurettin 11 hours ago [-]
ccache made C++ usable for me.
pjmlp 47 minutes ago [-]
It has been mostly usable to me with binary libraries, precompiled headers, incremental compiling and linking, and now modules.
Although ClearMake object sharing was quite neat experience back in the day.
> While we're at it, we also want to convert it to use recursive descent; it was originally written using parser combinators because that was what I was comfortable with at the time
Aren't parser combinators just one way to do recursive descent?
jjk7 10 hours ago [-]
Explicitly hand-writing the parsing logic allows you to get more specific in error states and messages that a combinator can't.
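As a rough sketch of what that looks like (toy grammar, invented names), a hand-written recursive-descent parser can set a precise message at the exact point of failure:

```c
#include <ctype.h>
#include <stdio.h>

/* Hand-rolled recursive descent over "1+2+3"-style input. Because we
   control every branch, the failure site can say exactly what was
   expected, whereas a combinator library typically reports a generic
   "no alternative matched" at the failing position. */
typedef struct {
    const char *cur;
    char err[64];   /* last error message, empty if none */
} Parser;

static int parse_number(Parser *p, long *out) {
    if (!isdigit((unsigned char)*p->cur)) {
        snprintf(p->err, sizeof p->err,
                 "expected a number, found '%c'", *p->cur ? *p->cur : '?');
        return 0;
    }
    long n = 0;
    while (isdigit((unsigned char)*p->cur)) n = n * 10 + (*p->cur++ - '0');
    *out = n;
    return 1;
}

/* sum := number ('+' number)* */
static int parse_sum(Parser *p, long *out) {
    long total;
    if (!parse_number(p, &total)) return 0;
    while (*p->cur == '+') {
        p->cur++;
        long rhs;
        if (!parse_number(p, &rhs)) return 0;  /* err already set at the '+' */
        total += rhs;
    }
    *out = total;
    return 1;
}
```

The point isn't that combinators can't produce good errors at all, just that in the hand-written version every recovery and message decision is ordinary code you can see and tweak.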
rubenvanwyk 11 hours ago [-]
After watching YT talks on Roc using Rust and Zig, I always wondered why they didn't just opt for Zig only. Interesting to see this.
bhansconnect 8 hours ago [-]
Yeah, the compiler was started in 2019 when zig wasn't nearly as viable of an option. On top of that, you don't necessarily know what you want until you build something.
I'm part of the core team, and I'm honestly still a bit surprised we're switching to zig. I think it makes sense, it just kinda finally aligned and was decided now. I guess enough technical debt piled up and the core of the language was clearly discovered. So we really know what we want now.
cma256 12 hours ago [-]
It will be interesting to compare compile times afterwards. I'm also interested if there's any Rust features that are 'taken for granted' which will make the Zig version less pleasant to read or write. Hopefully the old Rust repo is archived so I can reference it a year from now!
badfishblues 11 hours ago [-]
It will be impossible to ever truly do a one-to-one comparison. First, there may be a ton of code in the existing implementation that gets cut out by a mass rewrite thanks to better, more informed architecture decisions. On the other hand, new features may find their way into the rewrite and cause the all-Zig implementation to be larger than the original Rust-and-Zig implementation. I don't think any comparison will be fully "fair". A smaller implementation could be argued as only being possible in Rust or Zig because of its advantages over the other language. I expect results to be controversial.
gauge_field 7 hours ago [-]
I am also curious about at least some benchmark, even if it is not a perfect one-to-one comparison. There should be at least one case study where they have a shorter feedback loop with Zig than with Rust. It would be interesting to see that.
bhansconnect 8 hours ago [-]
Yeah, the rewrite will clean up a ton of technical debt for us. So it will very much be a bad comparison.
nialv7 9 hours ago [-]
Whenever I see <language A> rewrites its compiler in <language B>, and A != B, I lose a bit of interest in language A.
Hasnep 4 hours ago [-]
Why? Roc has good reasons for not self hosting the compiler.
baq 11 hours ago [-]
A very good discussion and hard to disagree with conclusions. Also, educational.
cardanome 10 hours ago [-]
Richard Feldman is an amazing educator. He got me into Elm back in the days.
I am a bit sad that Roc is falling into pitfalls similar to Elm's in its quest to be simple by rejecting more advanced type system features. That just does not work. In dynamic languages you can opt for minimalism because anything goes by default, but in fully statically typed languages things can get painful quickly. Even golang had to add generics.
Still many amazing ideas in the language. The platform concept is pretty neat. Not doing implicit currying is a good move.
Hasnep 4 hours ago [-]
But Roc already has generics, and Go has managed to become very successful despite having a simpler type system than Roc.
bbkane 4 hours ago [-]
What particular type system features do you miss? I think Elm proved that a restricted type system DOES WORK for large amounts of software. It's been a few years since I've written Elm, so I don't recall specific pain points I've had with the type system.
One thing that makes Go's restrictive type system more bearable is a fantastic stdlib package for analyzing and generating Go code. I wonder if Roc will get anything similar.
forgotmypass1 2 hours ago [-]
What kind of advanced type system features?
petabyt 9 hours ago [-]
Fast compile times in Zig? Last time I tried it was super slow. Has anything changed in recent years?
modernerd 8 hours ago [-]
Last time I tried it was missing both speed and incremental compilation. That still looks to be the case due to no linker support.
Compared to Rust it's blazingly fast. Compared to Go it's slower.
metaltyphoon 7 hours ago [-]
No it's not. I know it's not a benchmark, but just try a `zig init` and build their hello world. It takes significantly longer.
baazaa 55 minutes ago [-]
I think this has something to do with Zig building part of the std that other languages ship as binaries. Incremental compilation will remove this small overhead.
sweeter 7 hours ago [-]
```sh
sweet@nadeko ~ $ mkcd test
sweet@nadeko ~/test $ zig init
info: created build.zig
info: created build.zig.zon
info: created src/main.zig
info: created src/root.zig
info: see `zig build --help` for a menu of options
sweet@nadeko ~/test $ time zig build
zig build 5.08s user 0.58s system 119% cpu 4.745 total
# cached
sweet@nadeko ~/test $ time zig build
zig build 0.01s user 0.03s system 132% cpu 0.026 total
# after rewriting the main function to call a function that takes a pointer to another function
sweet@nadeko ~/test $ time zig build
zig build 0.02s user 0.03s system 136% cpu 0.032 total
```
rererereferred 9 hours ago [-]
Their x86 backend seems to be more complete, bypassing LLVM and allowing for fast debug builds.
zozbot234 9 hours ago [-]
Rust has a Cranelift-based backend that you can use for that.
bhansconnect 8 hours ago [-]
It sadly helps a lot less than you would hope. A true dev backend can be many times faster than LLVM at compiling code. Cranelift is more like a 1.5x or 2x improvement. Not to mention a lot of time is spent in the Rust frontend.
sweeter 7 hours ago [-]
It's honestly pretty dang good now. I can bootstrap Zig from C in about 3-5 minutes, and then use the self-hosted stage3 compiler to compile Zig very quickly (under 1 minute, iirc). Once the cache kicks in, it's even faster. I guess my overall gauge on this is the impression I get: Zig build times never annoy me and I never think about them... especially compared to clang, gcc, and Rust.
stevage 9 hours ago [-]
If author is reading this, maybe worth a paragraph saying why you're not rewriting in Roc.
teo_zero 9 hours ago [-]
Like this one?
> Roc's compiler has always been written in Rust, and we do not plan to self-host. We have a FAQ entry explaining why, and none of our reasoning has changed on that subject.
because that's hard. also it's one choice I respect them for not making: don't self-host until you have actual users and momentum (unless you want to prove a point). if Roc intends to have industry usage, keeping it in another language for now is a good move.
bhansconnect 8 hours ago [-]
Roc never plans to self host. We want roc the compiler in the long term to be as nice as possible for end users. A big part of that is as fast as possible. While roc can be fast, it can't compete with all the techniques that can be done in rust/zig/c/c++. It fundamentally is a higher level language to increase developer joy and reduce bugs. So it isn't the right tool when you are trying to reach the limits of hardware like this.
hassleblad23 11 hours ago [-]
Zig seems to be a good choice here.
UncleOxidant 11 hours ago [-]
> Rust's compile times are slow. Zig's compile times are fast. This is not the only reason, but it's a big reason.
So why not something like OCaml that has fast compile times and a lot of nice features for writing compilers? (pattern matching, etc.)
theLiminator 11 hours ago [-]
I guess if you want to never limit yourself in terms of performance, you're kinda stuck with either C/C++/Zig/Rust/etc.
GCed languages can be fast, probably even as fast as any of the aforementioned languages with enough engineering effort, but it's actually easier to write high-performance code in those languages than in a GCed one.
platinumrad 11 hours ago [-]
The same reason why they're not self-hosting. They want compiler performance to be in the top C, C++, Rust, Zig tier. OCaml isn't slow, but at best it's in the second Java, C#, etc. tier.
whytevuhuni 11 hours ago [-]
Because, like he said, compile times matter.
This includes the Roc compiler too. Zig is significantly faster than OCaml.
LAC-Tech 9 hours ago [-]
The OCaml compiler is significantly faster than the Zig one. It's one of the fastest I've seen.
whytevuhuni 8 hours ago [-]
Ah, sorry, I was referring to the speed of the resulting Roc compiler.
A Roc compiler written in Zig would compile Roc code significantly faster than a Roc compiler written in OCaml.
The Roc webpage talks about Roc doing web servers via Rust libs...
Behind the scenes, it uses Rust's high-performance hyper and tokio libraries to
execute your Roc function on incoming requests.
Will these be replaced with Zig networking libs?
isaacvando 8 hours ago [-]
Not necessarily! Each Roc app runs on a particular platform which is built in a host language for a specific domain. That host language could be Rust, Zig, C, C++, Go, Swift, etc. It's possible the basic-webserver platform will be rewritten to Zig but it doesn't need to be.
sweeter 9 hours ago [-]
Yay for Zig! I'm a big Zig fan. Its seamless interop with C makes it super fun. I really enjoy writing code in a C-style way. It's still got some rough edges, but it's definitely fun and enjoyable to write, imo.
einpoklum 7 hours ago [-]
It is interesting to note an intentional choice to go from a "safe" language to an "unsafe" language for a program whose use is not without security implications (even if it's not network-facing).
marxisttemp 9 hours ago [-]
I’ve been reading a lot about comptime in Zig and it’s really cool, it unifies and simplifies a lot of metaprogramming concepts that normally end up being the ugliest and clunkiest parts of most languages.
I’ve been immersed in Swift for a couple years after working as a Go programmer, and I find myself pining for a language that’s more expressive than the latter and more concise than the former.
betimsl 10 hours ago [-]
I think Go would have been a better choice.
anacrolix 9 hours ago [-]
absolutely not, both because it's awful for implementing parsers and because it introduces lots of extra hoops for interop.
its build system is also terrible for targeting lots of varying platforms and feature sets.
it has a custom IR that isn't helpful interacting with other systems.
its runtime is really nice but it's a black box and not suitable for interop.
I recommend reading Roc's FAQ too - it's got some really great points. E.g. I'm internally screaming YESSS! to this: https://www.roc-lang.org/faq.html#curried-functions
But then it has other weird features too, like they seem to be really emphasising "friendliness" (great!) but then it has weird syntax like `\` for anonymous functions (I dunno where that dumb syntax came from by Nix also uses it and it's pretty awful). Omitting brackets and commas for function calls is also a bad decision if you care about friendliness. I have yet to find a language where that doesn't make the code harder to read and understand.
It’s a syntax that’s several decades old at this point.
It’s different, but not harder. If you had learned ML first, you’d have found Algol/C-like syntax equally strange.
That's not ML syntax. Haskell got it from Miranda, I guess?
In SML you use the `fn` keyword to create an anonymous function; in Ocaml, it's `fun` instead.
[1]: https://homepages.inf.ed.ac.uk/wadler/papers/papers-we-love/... (can be spotted on page 353)
But yes, the slash is just an ASCII stand-in for a lambda.
ETA: I tracked down a copy of the Edinburgh LCF text and I have to eat crow. It doesn't use a lambda, but it does use a slash rather than a reserved word. The syntax, per page 22, is, in fact, `\x. e`. Similar to Haskell's, but with a dot instead of an arrow.
https://archive.org/details/edinburghlcfmech0000gord
https://github.com/roc-lang/roc/releases/tag/0.0.0-alpha2-ro...
The only relevant reason he lists is point-free, but he doesn't go far enough. Point-free very often turns into write-only balls of unmaintainable nastiness. Wanting to discourage this behavior is a perfectly reasonable position. Unfortunately, this one true argument is given the most tepid treatment of all the reasons.
Everything else doesn't hold water.
As he knows way better than most, Elm has auto-curry and has been the inspiration for several other languages getting better error messages.
Any language with higher-order functions can give a function as a result, and if you haven't read the docs or checked the type, you won't expect it. He left higher-order functions in, so even he doesn't really believe this complaint.
The argument about currying and pipe isn't really true. The pipe is static syntax known to the compiler at compile time. You could just decide that the left argument is applied/curried to the function before the right argument.
I particularly hate the learning curve argument. Lots of great and necessary things are hard to learn. The only question is a value judgement about whether the learning is worth the payoff. I'd guess that most Roc users already learned about currying in a more popular FP language before ever looking at Roc, so I don't think this argument really applies here (though I wouldn't really care if he still believed it wasn't worth the learning payoff for the fraction of remaining users).
To reiterate, I agree with his conclusion to exclude currying, but I wish he were more straightforward with his one good answer that would tick off a lot of FP users rather than resorting to a ton of strawman arguments.
A one-character ASCII rendering of the Greek lowercase letter lambda: λ
λx → x + 5
\x -> x + 5
If anything I don't think Haskell goes far enough with the automatic currying and point-free stuff. If you're going to be declarative, don't half-ass it.
I used to root for the D programming language but it seems to have got stuck and never gained a good ecosystem. I've disliked Rust from the first time I saw it and have never warmed up to its particular trade-offs. C feels unergonomic these days and C++ is overfull with complexity. Zig feels like a nice pragmatic middle ground.
I actually think Rust is probably perfectly suited to a number of tasks, but I feel I would default to choosing Zig unless I was certain beyond doubt that I needed specific Rust safety features.
Recently, a coworker of mine made a great observation that made everything clear to me. I was looking for a replacement for C, but Rust is actually a replacement for C++. Totally different beast - powerful but complex. I need to see if Zig is any closer to C in spirit.
I've seen this sentiment a lot, and I have to say it's always puzzled me. The difference between Rust and basically any other popular language is that the former has memory safety without GC†. The difference between C++ and C is that the former is a large multi-paradigm language, while the latter is a minimalist language. These are completely different axes.
There is no corresponding popular replacement for C that's more minimalist than Rust and memory safe.
† Reference counting is a form of garbage collection.
I think that's where the sentiment comes from. It's not that Rust is similar to C++ in terms of the actual languages and their features. It's that people who like C++ are more likely to like Rust than people who like C are.
I would argue that C is not a minimalistic language either. There is a lot under the hood in C. But it feels small in a way that Rust and C++ don't.
I think Rust and C++ appeal to programmers who are OK with a large investment in wrapping their head around a big complex language with the expectation that they will be able to amortize that investment by being very productive in large projects over a large period of time. Maybe sometimes the language feels like trying to keep a piece of heavy duty machinery from killing you, but they're willing to wrestle with it for the power you get in return.
The people who are excited about Zig and C want something that feels more like a hand tool that doesn't demand a lot of their attention and lets them focus on writing their code, even if the writing process is a little more manual labor in return.
Among my friends I know several people who are very enthusiastic about board games and several who are very enthusiastic about craft beer, but there's not a particular noticeable overlap. Personally of course I am very into board games and I don't drink at all.
> I would argue that C is not a minimalistic language either. There is a lot under the hood in C.
Nah, C actually is small, that's why K&R is such a short book. It makes enormous compromises to pull that off, but presumably on a machine where 64kB of RAM is extraordinary these compromises made lots of sense. C23 is quite a bit bigger, for example "bool" is now an actual type (albeit implicitly convertible) but still small by modern standards.
There really isn't that much "under the hood", it's often just the least possible moving parts that could possibly have worked.
a[b] in C++ is a call to a member function a.operator[](b) -- arbitrary user code
a[b] in Rust is a call to core::ops::Index::index(a, b) or, in context IndexMut::index_mut -- again, arbitrary user code
a[b] in C is just a pointer addition of a and b - one of them will be converted to a pointer if necessary, and then the other one is added to the pointer using normal pointer arithmetic rules
Also even between C89 and C23, many folks wrongly count "whatever my compiler does" as C, and there are endless amounts of extensions to be aware of.
I'd argue that C is much bigger than K&R, but that isn't immediately visible to a new programmer because it's all undefined behaviors.
It's funny because to me there's an analogy with heavy machinery but materially different: there are some industrial machines that have two buttons that need to be actuated to activate the mechanism, separated by arm length in order to ensure that the operator's arms are out of the way when the limb crunching bits are moving. I see Rust that way, engineering the safe way to do things as the path of least resistance, at the cost of some convenience when trying to do something "unsafe".
You clearly don't like it, but it seems many people disagree.
Well, the really harsh way of putting this is that the patterns break for a reason; they rely on global claims about the program, so they aren't genuinely robust in the context of code that sits within a large, constantly evolving codebase that can't be practically surveyed in its entirety. Rust is very good at picking patterns that can be verified with a comparatively straightforward, "local" analysis that broadly follows the same structure as the actual program syntax. Safety claims that rely on "global" properties which cannot be kept within a self-contained, module-like portion of the code are essentially what the unsafe marker is intended for. And this is exactly what idiomatic C/C++ code often gives you.
This is actually why I think that proposals like Safe C++ should get a lot more attention than they do at present. Yes, Safe C++ changes what's idiomatic in the language but it does so in a way that's broadly sensible (given our increased attention to memory safety) especially in a context of "programming in the large".
I jest, but only a tiny bit. The features of heavy OOP and feature-rich languages tend to show their value only in really large codebases being worked on by several different people—precisely because many of their features are just guardrails to make it hard for people to code incorrectly against another's understanding or assumptions, when shared understanding is more difficult to establish. Contrarily, any solo programmer or really small team is almost invariably better served by a language like go, C, scheme, or Zig.
I always thought C++ was a terrible idea, from way before C++ 98, I own Stroustrup's terrible book about his language, which I picked up at the same time as the revised K&R and it did nothing to change that belief, nor have subsequent standards.
However, I do have some sympathy for this sentiment about Rust being a better C++. Even though Rust and C++ have an entirely different approach to many important problems the syntax often looks similar and I think Zig manages to be less intimidating than Rust for that reason if you don't want that complexity.
Personally I had no interest in C++† and I have no serious interest in Zig.
† Ironically I ended up caring a lot more about C++ after I learned Rust, and most specifically when understanding how Rust's HashMap type works, but I didn't end up liking C++ I just ended up much better informed about it.
If you say "you can't do x with y in C++" you will get an "yes you can, you just use asd::dsadasd::asdadqwreqsdwerig_hfdoigbhiohrf() with weaorgoiawr flag". From what I have seen from Rust, it is similar. I don't want to fill my brain with vim bindings.. cough.. Rust ways of doing something. I just want to code my hobby game engine v7.
That said, I am happy to use software written in it. Even though the evangelists can be really annoying.
I agree with Raymond Chen's take on the academic definition of GCs[1], and therefore Rust is certainly a GC'd language (because your code behaves as though memory is infinite... usually). It's probably one of the first examples of "static garbage collection" - though I'm sure someone will point out a prior example.
[1]: https://devblogs.microsoft.com/oldnewthing/20100809-00/?p=13...
The "simulates infinite RAM" is an interesting perspective but simply not the subject of most conversations.
A language based on such a paradigm can be provably memory safe, and regions can have their own allocators and optionally provide locking when the regions are shared.
This approach obviates the need for reference counting individual allocations (since regions are tracked as a whole), but it suffers from excess memory usage in the event of many short-lived allocations (i.e. they leak until the entire region's allocations go out of scope). But those allocation patterns can be problematic in every systems language, as they can eventually cause memory fragmentation.
That problem can be minimized using per-allocation reference counting, but that can incur a heavy performance hit. Although not having to use it everywhere could minimize the impact.
The plus side is you don't have to worry about borrow checking, so such a language can be more flexible than Rust, while still maintaining the memory safety aspect.
The question, as always, is: is the juice worth the squeeze?
Truthfully, I suspect no. The Rust train has left the station and has a decade head start. Even if it is a pain in the ass, lol.
And I admit, I loathe the borrow checker. Ironically, I never really have problems with it, because I do understand it, it's just that I find it too limiting. Not everything I want to do is unsafe, and I hate the way it has made people think that if you really do know better than the borrow checker, you must clearly be doing something wrong and you should re-architect your code. It's insulting.
In the real world, memory safety is not all-or-nothing (unless you're willing to concede that Rust is not safe either, since unsafe Rust exists). I'm working on an embedded project in Rust and I'd MUCH rather be using Zig. The only safety thing that Rust would give me that Zig does not is protection from returning pointers to stack-allocated objects (there are no dynamic allocations and no concurrency outside of extremely simple ISRs that push events onto a statically allocated queue). But in exchange I have to deal with the presence of unsafe Rust, which feels like a gigantic minefield even compared to C.
https://github.com/ityonemo/clr
I think idiomatic coding norms for unsafe Rust are still a bit half-baked, and this is where something like Zig can have an advantage of sorts. You can see this also, e.g. in the ongoing proposals for a Pin<> alternative.
Those are the axes relevant to the parent in the context of their comment - not specific language semantics or core features.
Also, there's FORTH!
> difference between C++ and C is that the former is a large multi-paradigm language, while the latter is a minimalist language. These are completely different axes.

> There is no corresponding popular replacement for C that's more minimalist than Rust and memory safe.
Edit: oh, I never read the last bit "and memory safe" -- well ya, that's kind of rust's major advantage.
1. Rust is an immensely complicated language, and it's not very composable (see the async debacle and whatnot). On the simple<->complex slider, it's smack dab on the right of the scale.
2. Ignoring any nitpicking [0], Zig is memory-safe enough in practice, placing it much closer to Rust than to C/C++ on the memory safety axis. My teammates have been using Zig for nearly a year, and the only memory safety bug was (a) caught before prod and (b) not something Rust's features would have prevented [1]. The `defer` and `errdefer` statements are excellent, and much like how you closely audit the use of `unsafe` in Rust there is only a small subset of Zig where you actually need to pull your magnifying glass out to figure out if the code has any major issues. In terms of memory issues I've cared about (not all conforming to Rust's narrow definition of memory safety), I've personally seen many more problems in Rust projects I contribute toward (only the one in Zig, plus a misunderstanding of async as I was learning the language a few years ago, many of varying severity in Rust, at this point probably more code written in Zig than Rust, 10yoe before starting with either).
With that in mind, you have C/C++ on the unsafe axis and Zig/Rust on the safe axis. The complexity axis is self-explanatory, fleshing out the analogy.
Is Zig memory-safe? No, absolutely not. Does that mean that Rust will win out for some domains? Absolutely. In practical terms though, your average senior developer will have many memory safety bugs in C/C++ and few in Zig/Rust. It's a reasonable way to compare and contrast languages.
Is it a perfect description? No, the map is not the territory. It's an analogy that helps a lot of people understand the world around them though.
[0] Even Python is simpler than Rust, and it's memory-safe. If we're limiting ourselves to systems languages, you still have a number of options like Ada and Coq. Rust is popular because it offers a certain tradeoff in the safety/performance/devex Pareto curve, and because it's had a lot of marketing. It's unique in that niche, by definition, but it's far from the only language to offer the features you explicitly stated.
[1] It was just an object pool, and the (aggregate) resetting logic wasn't solid. The objects would have passed through the borrow checker with flying colors though.
Edit: To your GC point, many parts of Rust look closer to GC than not under the hood. You don't have a GC pause, but you have object pools (sometimes falling back to kernel object pools) and a variety of allocation data structures. If RC is a GC tactic, the extra pointer increment/decrement is negligible compared to what Rust actually does to handle its objects (RC is everything Rust does, plus a counter). That's one of my primary performance complaints with the language, that interacting with a churn of small objects is both expensive and the easiest way to code. I can't trust code I see in the wild to behave reasonably by default.
The only time I've seen "almost memory safe" actually work is in Go and Swift, which have memory safety problems with concurrency, but they're actually rare enough not to matter too much (though in Go's case it would have been easy to design interfaces and slices not to have the problem, and I wish they had). I simply don't believe that Zig is meaningfully more memory safe than C++.
Is your opinion based on anything other than pure speculation?
I think it makes more sense to form an opinion after actually having tried Rust, C++, and Zig in earnest.
There are lots of us out there who've done it. Join us!
There's going to be some impact on low-level libraries that directly manipulate the words that constitute slices and interfaces, and there will be some slight performance impact and increase in memory usage, but hopefully nothing drastic.
Two-word loads could be used for interface values (assuming that alignment is increased), except if support for older x86-64 is needed. There are implementations that require using CMPXCHG16B, which is a really slow way to load two words. Both Intel and AMD have updated their ISA manuals to state that using VMOVDQA etc. is fine (once the CPU supports AVX and the memory is cacheable).
While I use C++ a lot I am not a fan. It is just one of many languages I use. But from my personal experience it is true. I frankly forget when I last hit a memory problem, years ago for sure. And my code is often a stateful multithreaded backend with a high request rate.
The async featureset in Rust is far from complete, but async is also somewhat of a niche. You're not necessarily expected to use it, often you can just use threads.
> Zig is memory-safe enough in practice
Temporal safety is a huge deal, especially for the "programming in the large" case. Where unstated assumptions that are relied upon for the safety of some piece of code can be broken as some other part of the code evolves. The Rust borrow checker is great for surfacing these issues, and has very few practical alternatives.
> If we're limiting ourselves to systems languages, you still have a number of options like Ada and Coq.
It's easy to be "safe" if the equivalent to free() is marked as part of the unsafe subset, as with Ada. Coq is not very relevant on its own, though I suppose it could be part of a solution for proving the memory safety of C code. But this is known to be quite hard unless you do take care to write the program in the "idiomatically safe" style that a language like Rust points you to.
It is a procedural language and as such, composition of functions is not implemented easily, if at all possible. It is a shame for sure that such powerful techniques are not possible in Rust, but for some people it is worth the trade off.
Sure, but now you've snuck an Artifact of Death (in the TVTropes sense) into your codebase. It won't kill your code base immediately, and with proper handling it might work, but all it takes is one mistake, one oversight, for it to cause issues.
There is Objective-C, it fits your definition of memory safety.
Rust is a great replacement for C++ as it fits into the same place in the stack of tools.
Go is not a C or Python replacement.
Zig is a good replacement for C.
TinyGo and TamaGo folks enjoy writing their bare metal applications on embedded and firmware.
Google stopped writing new Python, because they found that the same engineers would produce software with similar defect rates, at a similar price and in similar time, with Go, but it would have markedly better performance, so, no more Python.
Go manages to have a nice sharp start, which means there are going to be a bunch of cases where you didn't need all the perf from C, but you did need a prompt start, so, Java was not an option but Go is fine.
In my own practice, Rust entirely replaces C, which has previously been my preferred language for many use cases.
C++ has tons of extra features over C because it's a kitchen sink language. Rust has some extra features over C because in order to support static analysis for memory safety in a practically usable manner, you need those features. In practice, there are lots of C++ features that Rust doesn't bother with, especially around templates. (Rust just has a more general macro feature instead. The C++ folks are now on track to adding a clunky "metaclass" feature which is a lot like Rust macros.)
C++ already has almost Rust-style macros, with a mix of compile-time execution, concepts and traits, without requiring another mini-language or an external crate (syn).
In what sense? Features or complexity? From a productivity/correctness perspective, Rust is a huge step up over C++.
Use more xcode, Clion, Visual Studio, C++ Builder, and less vi and Emacs for C and C++.
Not everyone has the RIIR luxury for what we do.
https://news.ycombinator.com/item?id=42923829
nerdy langnoob or noobie langnerd here. not sure which is which, cuz my parsing skills are nearly zilch. ;)
I believe Zig even has some libraries for doing more exotic things easily, such as converting between an array of structs and a struct of arrays (and back).
I get it, thanks. in C you have to cast everything to (void *) to do things like a generic vector.
>I believe Zig even has some libraries for doing more exotic things easily, such as converting between an array of structs and a struct of arrays (and back).
yes, iirc, there was a zig thread about this on hn recently.
Zig shatters that with comptime and the 'type' type.
I would disagree. C++ provides way more features than Rust and to me Rust feels way more constrained comparatively.
Many of my favourite Rust features aren't in C++. For example, they don't have real sum types, they don't have the correct move semantics, they are statement-oriented rather than expression-oriented, and their language lacks decent built-in tooling.
But also, some of the things C++ does have are just bad and won't get fixed/ removed because they're popular. And there's no sign of it slowing down.
I think that in practice you should use Rust in most places that C++ is used today, while some of the remainder should be WUFFS, or something entirely different.
No one from them would accept a patch in Rust instead of C or C++.
Yes I know about Deno, but even they haven't rewritten V8, and wgpu or Rust CUDA aren't the same as what Khronos, NVidia, AMD, Microsoft put out in C++.
I am doing fine. Thank you. Also I am not a language warrior. I do have my preferences but use many since I am an independent and work with many clients.
Odin has the best approach for "standard library" by blessing/vendoring immensely useful libraries
Odin also has the best approach for Vector Math with native Vector and Matrix types
Odin's "standard library" stuff is very silly; it still feels like we were just handed a copy-paste of the lead developer's "useful stuff" directory. Bill doesn't feel there's value in actual standard libraries, so instead here's... whatever. He insists that's not what's going on here, but that doesn't change how it feels.
I suppose there is also Jai in a similar space as well, although I'm not a devotee to Jonathan Blow and I don't share much of the excitement his followers seem to have.
I do feel Zig has the current trend moving in its favor, with projects like Ghostty and Bun gaining prominence. I think Odin would need something like that to really capture attention.
And now they are doubling down on that by moving from "OCaml meets C++" to "C, the good parts"!
If FP isn't good for writing a compiler, what is it good for?
There are a number of new imperative features that have been (or will be) added to the language that capture a lot of the convenience of imperative languages without losing functional guarantees. Richard gave a talk about it here: https://youtu.be/42TUAKhzlRI?feature=shared.
sorry, but why?
https://www.roc-lang.org/faq#rust-and-zig
Summing the Fibonacci sequence I guess.
[1] https://www.roc-lang.org/faq#self-hosted-compiler
I assume that statement will need updating.
Around 2014, I did some experiments with OCaml, and liked it very much
Then I went to do lexing and parsing in OCaml, and my experience was that Python/C++ are actually better for that.
Lexing and parsing are inherently stateful; it's natural to express those algorithms imperatively. I never found parser combinators compelling, and I don't think there are many big / "real" language implementations that use them, if any. They are probably OK for small languages and DSLs.
I use regular expressions as much as possible, so it's more declarative/functional. But you still need imperative logic around them IME [1], even in the lexer, and also in the parser.
---
So yeah I think that functional languages ARE good for writing or at least prototyping compilers -- there are a lots of examples I've seen, and sometimes I'm jealous of the expressiveness
But as far as writing lexers and parsers, they don't seem like an improvement, and are probably a little worse
[1] e.g. lexer modes - https://www.oilshell.org/blog/2017/12/17.html
You get to decide what part of your code should be imperative, and which should be functional.
Rust on the other hand didn’t prioritize compile times and ended up making design decisions that make faster compilation difficult to achieve. To me it’s the biggest pain point with Rust for a large code base and that seems to be the sentiment here as well.
(Source: I wrote much of the Rust compiler.)
Many foundational crates, serde for example, contribute much more to compile times than they need to.
I spent a long time reinventing many foundational Rust crates for my game engine, and I proved it's possible to attain similar features in a fraction of the compile time, but it's a losing battle to forgo most of the ecosystem.
I think a significant portion of our pain with rust compile times is self inflicted due to the natural growth of our crate organization and stages.
I still think the rewrite in zig is the right choice for us for various reasons, but I think at least a chunk of our compile times issues are self inflicted (though this happens to any software project that grows organically and is 300k LOC)
On that note, thank you for your part! I sure enjoy your work! :)
But now it's 2025, and the new version still hasn't been enabled on stable Rust, since the Polonius implementation is seen as far too slow for larger programs. (I'm not exactly sure how horrible it really is, I haven't looked at the numbers.) A big goal of the types team since last year has been to reimplement a faster version within rustc [2].
I'd count this as a language feature (albeit a relatively minor one) that's been greatly deferred in favor of shorter compile times.
[0] https://blog.rust-lang.org/inside-rust/2023/10/06/polonius-u...
[1] https://github.com/rust-lang/rust/pull/51133
[2] https://rust-lang.github.io/rust-project-goals/2024h2/Poloni...
Why aren't people designing modern languages to make it easier to keep a stable ABI, rather than giving up entirely?
Not to mention, Richard has a background mostly doing higher level programming. So jumping all the way to something like C or Zig would have been a very big step.
Sometimes you need a stepping stone to learn and figure out what you really want.
Their motivation seems to justify the decision, but I'm pretty sure this will scare away potential contributors. I've been following Roc personally, and probably will continue to do so, but this definitely has killed a lot of interest for me.
One specific thing I like is that the motivation for this rewrite is organic, i.e., not driven by an external pressure campaign. It's refreshing to see a drama-free rewrite.
Almost all parser combinators are recursive descent with backtracking, they just add higher-order plumbing.
I have a feeling that whatever issue they've encountered with the combinator approach could have been worked around by handwriting only a few pieces.
Personally, I find parser combinators nice for small things but painful for large and robust things.
Someone made a joke comment how in a few years there would be tons of HN threads about rewriting Rust to Zig.
I think I'd rather stick with C and run static analysis tools on my code when generating release candidates than have to deal with that.
On top of that, Zig has gotten a lot more robust and stable over the last few releases. Upgrading is getting smoother.
But yeah, it is a risk.
There are probably some risks to it. And I think that you wouldn’t want to release a Roc v1 before Zig v1 as well.
But if things are working well now, you could always stay on an older version of Zig while you rewrite things for breaking changes in a newer version.
Still potentially a pain, but Rust is post v1 and they ended up deciding to rewrite anyway, so there are no guarantees in any approach.
[0] : https://www.roc-lang.org/faq#rust-and-zig
It's somewhere in the middle of the list of the "Why now?" section.
Papercuts like this do make or break adoption.
Even C++ is improving on that front.
LLVM improvements will do little to improve Rust, because the major issue is the lack of parallelism and the massive amount of LLVM IR that the frontend gives to the backend to handle.
Falsified many times during Rust's development.
Although ClearMake object sharing was quite neat experience back in the day.
Aren't parser combinators just one way to do recursive descent?
I'm part of the core team, and I'm honestly still a bit surprised we're switching to zig. I think it makes sense, it just kinda finally aligned and was decided now. I guess enough technical debt piled up and the core of the language was clearly discovered. So we really know what we want now.
I am a bit sad that Roc is following Elm into similar pitfalls in its quest to be simple, by declining to add more advanced type system features. That just does not work. In dynamic languages you can opt for minimalism because anything goes by default, but in fully statically typed languages things can get painful quickly. Even golang had to add generics.
Still many amazing ideas in the language. The platform concept is pretty neat. Not doing implicit currying is a good move.
One thing that makes Go's restrictive type system more bearable is a fantastic stdlib package for analyzing and generating Go code. I wonder if Roc will get anything similar.
https://github.com/ziglang/zig/issues/21165
```
sweet@nadeko ~ $ mkcd test
sweet@nadeko ~/test $ zig init
info: created build.zig
info: created build.zig.zon
info: created src/main.zig
info: created src/root.zig
info: see `zig build --help` for a menu of options
sweet@nadeko ~/test $ time zig build
zig build 5.08s user 0.58s system 119% cpu 4.745 total
# cached
sweet@nadeko ~/test $ time zig build
zig build 0.01s user 0.03s system 132% cpu 0.026 total
# after rewriting the main function to call a function that takes a pointer to another function
sweet@nadeko ~/test $ time zig build
zig build 0.02s user 0.03s system 136% cpu 0.032 total
```
> Roc's compiler has always been written in Rust, and we do not plan to self-host. We have a FAQ entry explaining why, and none of our reasoning has changed on that subject.
So why not something like OCaml that has fast compile times and a lot of nice features for writing compilers? (pattern matching, etc.)
GCed languages can be fast, probably even as fast as any of the aforementioned languages with enough engineering effort, but in terms of high performance, it's actually easier to write high performance code in the aforementioned languages rather than in a GCed language.
This includes the Roc compiler too. Zig is significantly faster than OCaml.
A Roc compiler written in Zig would compile Roc code significantly faster than a Roc compiler written in OCaml.