Zig; what I think after months of using it (strongly-typed-thoughts.net)
scubbo 3 hours ago [-]
Great write-up, thank you!

I used Zig for (most of) Advent Of Code last year, and while I did get up-to-speed on it faster than I did with Rust the previous year, I think that was just Second (low-level) Language syndrome. Having experienced it, I'm glad that I did (learning how cumbersome memory management is makes me glad that every other language I've used abstracts it away!), but if I had to pick a single low-level language to focus on learning, I'd still pick Rust.

toprerules 2 hours ago [-]
As a systems programmer, I think Rust has won. It will take decades before there is substantial Rust replacing the absurd amounts of C that run on any modern Unix system, but I do believe that out of all the replacements for C/C++, Rust has finally gained the traction most of them have lacked at the large companies that put resources behind these types of rewrites and exploratory projects.

I do not think Zig will see wide adoption, but obviously if you enjoy writing it and can make a popular project, more power to you.

tapirl 17 minutes ago [-]
Zig might not become very popular, but IMO, it will become more popular than Rust. Zig is good at all the areas Rust is good at. Zig is also good at game development which Rust is not good at.

And Zig is better when integrating with C/C++ libraries.

anacrolix 8 minutes ago [-]
I agree. It's not ideal but Rust is a genuine improvement across the board on C and C++. It has the inertia and will slowly infiltrate and replace those 2. It also has the rare capacity to add some new areas without detracting from the mainstay: It's actually good as an embedded language for the web and as a DSL. C/C++ definitely didn't have that.
hitekker 7 minutes ago [-]
A (big) former company I worked at actually removed the last of its Rust code last year. Maintaining Rust was much more expensive than predicted, and hiring and/or mentoring Rust Engineers proved even more expensive.

A simpler performant language like Zig, or a boring language via a new approach would have been a better choice.

chrisco255 43 minutes ago [-]
Rust has very real limitations and trade-offs. It compiles slowly and the binaries are large. The compiler also makes performance sacrifices that make it generally slower than C. I'm sure the language will continue to be successful, but it hasn't "won".
pkulak 25 minutes ago [-]
Why do you say slower than C? I’ve never seen a reason to believe they’re anything but roughly equivalent.
Narew 7 minutes ago [-]
From my experience comparing C++ and Rust on test algorithms: for a naive implementation, Rust is usually slightly faster than C++. But when you try to optimise, it's the opposite. It's really hard to optimise Rust code; you need to put in lots of unsafe, and unsafe is not user-friendly. Rust also forces you into some designs that are not always good for performance.
Defletter 12 minutes ago [-]
The last I heard, Rust had issues with freeing memory when it wouldn't need to, particularly with short-lived processes (like terminal programs) where the Rust program would be freeing everything while the C version would just exit and let the operating system do cleanup.
asa400 2 minutes ago [-]
Rust has ManuallyDrop, which is exactly the functionality you’re describing. It works just fine for those types of programs. The speed of the two is going to be largely dependent on the amount of effort that has gone into optimizing either one, not on some theoretical performance bound. They’re both basically the same there. There are tons of examples of this in the wild at this point.
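
A minimal sketch of that pattern (the allocation here is just a stand-in):

    use std::mem::ManuallyDrop;

    fn main() {
        // Wrap the data so its destructor never runs; the OS reclaims the
        // memory when the process exits, skipping the teardown work entirely.
        let data = ManuallyDrop::new(vec![0u8; 1 << 20]);
        println!("{} bytes allocated, never freed by us", data.len());
    }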
SPBS 2 hours ago [-]
Headers are missing IDs for URL fragments to jump to, e.g. https://strongly-typed-thoughts.net/blog/zig-2025#error-hand... doesn't work
3r7j6qzi9jvnve 3 hours ago [-]
(I've never used Zig myself.) For UB detection I've read Zig has first-class support for sanitizers, so you could run your tests with UBSan and catch UB at that point... Assuming there are enough tests.

As far as I'm concerned (doing half C / half rust) I'm still watching from the sidelines but I'll definitely give zig a try at some point. This article was insightful, thank you!

sedatk 2 hours ago [-]
> The first one that comes to mind is its arbitrary-sized integers. That sounds weird at first, but yes, you can have the regular u8, u16, u32 etc., but also u3. At first it might sound like dark magic, but it makes sense with a good example that is actually a defect in Rust to me.

You don't need Rust to support that because it can be implemented externally. For example, crates like "bitbybit" and "arbitrary-int" provide that functionality, and more:

https://docs.rs/crate/arbitrary-int/

https://docs.rs/crate/bitbybit/
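
Not those crates' exact API, but a rough sketch of the underlying idea, i.e. that a 3-bit integer can live entirely in library code:

    // Illustrative only: crates like arbitrary-int and bitbybit provide a more
    // complete, generic version of this.
    #[derive(Clone, Copy, Debug, PartialEq, Eq)]
    struct U3(u8);

    impl U3 {
        const MAX: u8 = 0b111;

        fn new(v: u8) -> Option<Self> {
            (v <= Self::MAX).then_some(Self(v))
        }

        fn wrapping_add(self, rhs: Self) -> Self {
            Self((self.0 + rhs.0) & Self::MAX)
        }
    }

    fn main() {
        let a = U3::new(6).unwrap();
        let b = U3::new(3).unwrap();
        assert_eq!(a.wrapping_add(b), U3(1)); // 6 + 3 wraps around to 1 in 3 bits
    }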

pcwalton 50 minutes ago [-]
I'm normally not sympathetic to the "you don't need that" argument, but there is a much stronger argument for not having arbitrarily-sized integers in Rust: the fact that values of such types can't have an address. The reason why our types all have bit sizes measured in octets is that a byte is the minimum granularity for a pointer.
chrisco255 26 minutes ago [-]
A byte isn't the minimum granularity for a pointer. The minimum is based on whatever target you're compiling for. If it's a 32-bit target platform, then the minimum granularity is 4 bytes. Why should pointer size determine value size though? It's super fast to shift bits around, too, when needed.
pcwalton 22 minutes ago [-]
> If it's a 32-bit target platform, then the minimum granularity is 4 bytes.

Huh? How do you think `const char *s = "Hello"; const char *t = &s[1];` works?

> Why should pointer size determine value size though?

Because you should be able to take the address of any value, and addresses have byte granularity.
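
A small Rust sketch of that constraint (names made up): you can copy a sub-byte field out of a packed value, but there is no address to hand out for it:

    struct Packed(u8);

    impl Packed {
        // Returns a copy of the low 3 bits; a `fn low3_ref(&self) -> &u8`
        // handing out a reference to just those bits can't exist, because
        // references (and pointers) have byte granularity.
        fn low3(&self) -> u8 {
            self.0 & 0b111
        }
    }

    fn main() {
        let p = Packed(0b1010_1101);
        assert_eq!(p.low3(), 0b101);
    }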

cwood-sdf 3 hours ago [-]
It seems like he wants zig to be more like rust. personally, i like that zig is so simple
zamalek 3 hours ago [-]
This is absolutely not what the article is about. A good majority of it is spent on the myth that Zig is safer than Rust, which has nothing to do with wishing Zig was more like Rust.
chrisco255 2 hours ago [-]
Is there a myth that makes that claim? Virtually every take I've heard is that Zig is "safe enough" while giving developers more control over memory and actually, it's specifically better for cases where you must write unsafe code, as it's not possible to express all programs in safe Rust.
bobbylarrybobby 2 hours ago [-]
If you must write unsafe code, what's wrong with just dropping down to unsafe in Rust when you need to? You have all the power unsafe provides, and you have a smaller surface area to audit than if your entire codebase resides in one big unsafe block.
ulbu 1 hours ago [-]
the barrier between unsafe and safe has additional rules. it’s not “just dropping to unsafe” – you need to make sure you leave it safely.
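
Roughly what that boundary looks like in practice (a toy example, not from any particular codebase):

    // The `unsafe` block is the only part that needs auditing, and the function
    // must re-establish the safety invariant before returning to safe code.
    fn first_byte(bytes: &[u8]) -> Option<u8> {
        if bytes.is_empty() {
            return None;
        }
        // SAFETY: we just checked that index 0 is in bounds.
        Some(unsafe { *bytes.get_unchecked(0) })
    }

    fn main() {
        assert_eq!(first_byte(b"hi"), Some(b'h'));
        assert_eq!(first_byte(b""), None);
    }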
chrisco255 2 hours ago [-]
Unsafe Rust is problematic: https://zackoverflow.dev/writing/unsafe-rust-vs-zig See also: https://github.com/roc-lang/roc/blob/main/www/content/faq.md...

Zig is not entirely unsafe. It provides quite a few compile time checks and primitives to catch memory leaks or prevent them altogether.

pcwalton 32 minutes ago [-]
> It provides quite a few compile time checks and primitives to catch memory leaks or prevent them altogether.

From what I've seen, clang has all of these and more for C++. If your metric is "tooling to help you catch UB", C++ is significantly superior to Zig.

chrisco255 21 minutes ago [-]
Zig has a C and C++ compiler built into it and works seamlessly with them. Several C/C++ projects use Zig as a build tool. Zig makes different trade-offs than C++ from a language design standpoint. C++ has a lot more footguns to create UB in the first place.
pcwalton 19 minutes ago [-]
> C++ has a lot more footguns to create UB in the first place.

I'd actually give the edge to C++ over Zig, because of smart pointers (not that I'm implying smart pointers are anywhere near sufficient).

zamalek 1 hours ago [-]
This is precisely the myth that the article talks about. Miri finds significantly more UB in unsafe Rust than Zig's checks do.

Even if it weren't, this exaggeration is complete theater. You aren't supposed to use unsafe Rust unless you really have to. I have been using Rust since 2020 and I've used it once, for 3 lines of code. The entirety of all Zig codebases is unsafe. That's fine if you are fine with unsafe code, but this myth is dishonest, and I take great issue with using a language where the founder is the primary source of the dishonesty - because what else is being swept under the rug?

chrisco255 1 minutes ago [-]
> Miri finds significantly more UB in unsafe Rust than Zig's checks do.

That's not a substantiated claim. Miri also runs very slowly.

> You aren't supposed to use unsafe Rust unless you really have to. I have been using Rust since 2020 and I've used it once, for 3 lines of code.

Cool, glad you haven't needed it. If you're ever writing interpreters or interfacing with external code, you'll need it.

> The entirety of all Zig codebases is unsafe

Zig is not 100% memory safe but it has compile-time safety for the vast majority of footguns developers get themselves into with C/C++. Meanwhile, Rust's safety overhead has real trade-offs in terms of developer productivity, computational performance, compiler performance and binary size.

throwawaymaths 1 hours ago [-]
yes but by the time you're using miri, why not just run zig with a separate static checker that does all the memory safety parts?

https://github.com/ityonemo/clr

pcwalton 42 minutes ago [-]
For one, it doesn't do all the "memory safety parts", according to the readme. I'm very skeptical that Zig can be made memory safe with a checker while still remaining compatible with existing code. Certainly neither C nor C++ can, and Zig isn't meaningfully different in expressivity (if anything, it's more expressive, which is the opposite of what you want).
throwawaymaths 10 minutes ago [-]
From the repo:

Q: You didn't do X, so Zig will never be able to track X

A: Maybe. Only way to know for sure is to fork this (or, hopefully, a 'real' successor) and fail. However, consider that "trivially" it should be possible to externally annotate every zig file with lifetime/type annotations identical to that of Rust and run "exactly the same" analysis as Rust and get the same memory safety as Rust.

it appears the clr author anticipated you: you didn't fork it, try, and fail, so you have ceded the authority to credibly make your speculative complaint

> Zig isn't meaningfully different in expressivity

it is meaningfully different in expressivity at the AIR level.

oneshtein 1 hours ago [-]
Yes, unsafe code is problematic in Rust, C, C++, etc. Is Zig different?
mk12 1 hours ago [-]
It’s harder to write correct unsafe Rust than correct Zig because (1) Rust uses references all over the place, but when writing unsafe code you must scrupulously avoid “producing” an invalid reference (even if you never dereference it), and (2) there’s lots of syntax noise which obscures what the code is doing (though &raw is a step in the right direction).
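
A small sketch of that hazard (hypothetical Header type; `&raw const` is the newer spelling of the same idea):

    use std::ptr;

    #[repr(C)]
    struct Header {
        len: u32,
        flags: u32,
    }

    // Reads `len` through a raw pointer without ever materializing a `&u32`.
    // Writing `&(*p).len` instead would "produce" a reference, which asserts
    // validity and alignment even if it is never dereferenced.
    fn read_len(p: *const Header) -> u32 {
        unsafe {
            let field: *const u32 = ptr::addr_of!((*p).len);
            field.read_unaligned()
        }
    }

    fn main() {
        let h = Header { len: 7, flags: 0 };
        assert_eq!(read_len(&h), 7);
    }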
ulbu 1 hours ago [-]
i haven’t seen anyone pronounce it anywhere once.
grayhatter 23 minutes ago [-]
This is 100% why the article was written. The author spends a LOT of time trying to convince others the way rust does $anything is better.
edflsafoiewq 3 hours ago [-]
The debate between static and dynamic typing continues unceasingly. Even when the runtime values are statically typed, it's merely reprised at the type level.
smt88 3 hours ago [-]
The debate seems to have mostly ended in a victory for static types.

The largest languages other than Python have them (if you include the transition from JS to TS). Python is slowly moving toward having them too.

Turskarama 2 hours ago [-]
I honestly don't see how anyone who has used a language with both unions and interfaces could come up with anything else that makes dynamic types better.

Either way you need to fulfill the contract, but I'd much prefer to find out I failed to do that at compile time.

ridiculous_fish 2 hours ago [-]
Don't confuse "presence of dynamic types" with "absence of static types."

Think about the web, which is full of dynamism: install this polyfill if needed, call this function if it exists, all sorts of progressive enhancement. Dynamic types are what make those possible.

Turskarama 1 hours ago [-]
Sure, I'm primarily a C# programmer which does have a dynamic type object, and occasionally use VB which uses late binding and can use dynamic typing as well.

You want to know how often I find dynamic typing the correct tool for the job? It's literally never.

Dynamic typing does allow you to do things faster as long as you can keep the whole type system in your head, which is why JavaScript was designed the way it was. That doesn't mean it is necessary to do any of those things, or is even the best way to do it.

adgjlsfhk1 53 minutes ago [-]
the place where imo static languages come up short is first class functions.
edflsafoiewq 2 hours ago [-]
The whole anytype/trait question is just dynamic typing, but at the type level instead of the value level.
patrick451 2 hours ago [-]
If I'm told to still use === in typescript, it's not actually a statically typed language.
lnenad 3 hours ago [-]
When did shadowing become a feature? I was under the impression it's an anti-pattern. As per the example in the article:

> const foo = Foo.init();
> const foo2 = try foo.addFeatureA();
> const foo3 = try foo.addFeatureB();

It's a non-issue to name vars in a descriptive way referring to the features, e.g. initial_foo and then foo_feature_a. Or name them based on what they don't have yet and call the final one foo. In the example he provided for Rust, vars in different scopes aren't really an example of shadowing imho; that's a different concept with different utility and safety. Replacing the value of one variable constantly throughout the code could lead to unpredictable bugs.

lolinder 2 hours ago [-]
> Replacing the value of one variable constantly throughout the code could lead to unpredictable bugs.

Having variables with scopes that last longer than they're actually used and with names that are overly long and verbose leads to unpredictable bugs, too, when people misuse the variables in the wrong context later.

When I have `initial_foo`, `foo_feature_a`, and `foo_feature_b`, I have to read the entire code carefully to be sure that I'm using the right `foo` variant in subsequent code. If I later need to drop Feature B, I have to modify subsequent usages to point back to `foo_feature_a`. Worse, if I need to add another step to the process—a Feature C—I have to find every subsequent use and replace it with a new `foo_feature_c`. And every time I'm modifying the code later, I have to constantly sanity check that I'm not letting autocomplete give me the wrong foo!

Shadowing allows me to correctly communicate that there is only one `foo` worth thinking about, it just evolves over time. It simulates mutability while retaining all the most important benefits of immutability, and in many cases that's exactly what you're actually modeling—one object that changes from line to line.
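
Something like this sketch (made-up config parsing) is what I mean:

    fn main() {
        let config = "timeout=30";          // raw text from somewhere
        let config = config.trim();         // still text, just normalized
        let config: u32 = config
            .strip_prefix("timeout=")
            .and_then(|s| s.parse().ok())
            .unwrap_or(60);                 // now a number; the earlier forms are unreachable
        println!("timeout is {config}s");
    }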

mk12 35 minutes ago [-]
It’s a trade-off.

If you allow shadowing, then you rule out the possibility of the value being used later. This prevents accidental use (later on, in a location you didn't intend to use it) and helps readability by reducing the number of variables you must keep track of at once.

If you ban shadowing, then you rule out the possibility of the same name referring to different things in the same scope. This prevents accidental use (of the wrong value, because you were confused about which one the name referred to) and helps readability by making it easier to immediately tell what names refer to.

pkulak 22 minutes ago [-]
And on the whole, I prefer shadowing. I’ve never had a bug in either direction, but keeping everything immutable without shadowing means you spend all your brain power Naming Things.
lnenad 2 hours ago [-]
> When I have `initial_foo`, `foo_feature_a`, and `foo_feature_b`, I have to read the entire code carefully to be sure that I'm using the right `foo` variant in subsequent code. If I later need to drop Feature B, I have to modify subsequent usages to point back to `foo_feature_a`. Worse, if I need to add another step to the process—a Feature C—I have to find every subsequent use and replace it with a new `foo_feature_c`. And every time I'm modifying the code later, I have to constantly sanity check that I'm not letting autocomplete give me the wrong foo!

When you have only one `foo` that is mutated throughout the code, you are forced to organize the processes in your code (validation, business logic) based on the current state of that variable. If your variables have values which are logically assigned, you're not bound by the current state of that variable. I think this is a big pro. The only downside mentioned by most people who disagree with me is ergonomics: shadowing is more convenient.

jay_kyburz 40 minutes ago [-]
I don't know zig at all, but why is the author trying to declare foo as const 3 times? Surely you would declare it as var with some default value that means uninitialized, then try and put values in it.
saithound 3 hours ago [-]
Shadowing always has been a feature, doubly so in languages which lack linear types.

It is a promise to the reader (and compiler) that I will have no need of the old value again.

Notice that applying the naming convention you suggest does nothing to prevent the bug in the code you quoted. It might be just as easy to write

    const initial_foo = Foo.init();
    const foo_feature_A = try initial_foo.addFeatureA();
    const foo_feature_B = try initial_foo.addFeatureB();

but it's also just as wrong. And even if you get it right, when the code changes later, somebody may add `const foo_feature_Z = try foo_feature_V.addFeatureX();`. Shadowing prevents this.

nine_k 3 hours ago [-]
Said promise should also be checked for sanity. E.g.

  for i in range(N) {
    for i in range(M) {
      # Typo; wanted j.
      # The compiler should complain.
    }
  }
Maxatar 3 hours ago [-]
The Rust compiler would complain in this case that the initial i variable is unused. Unused variables should be named with an underscore, _.
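
For instance, a sketch like this (with the warning promoted to an error) is rejected precisely because the outer `i` is never read:

    #![deny(unused_variables)]

    fn main() {
        let (n, m) = (3, 4);
        for i in 0..n {
            for i in 0..m {
                // Typo; the inner loop was meant to use `j`.
                // rustc: "unused variable: `i`" on the outer loop.
                println!("{i}");
            }
        }
    }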
dpc_01234 3 hours ago [-]
Shadowing is a feature. It's very common that a given value transforms its shape and previous versions become irrelevant. Keeping old versions under different names would just be confusing. With a type system there is no room for accidental misuse. I've been writing Rust professionally for > 2 years, and for years before that I was using it in my own projects. I don't think shadowing has ever backfired on me, while being very ergonomic.
lnenad 2 hours ago [-]
Depending on which language you are using, shadowing could lead to either small issues or catastrophic ones (in the scope of the program). If you have Python and you start with a number but end up with a complex dict, this is very different from having one value in Rust and a slightly different value, which is enforced by the compiler.
Maxatar 3 hours ago [-]
Don't see how it could introduce bugs. The point of replacing a variable is precisely to make a value that is no longer needed inaccessible. If anything introducing new variables with new names has the potential to introduce subtle bugs since someone could mistakenly use one of the variables that is no longer valid or no longer needed.
sjburt 2 hours ago [-]
When you are modifying a long closure and don’t notice that you are shadowing a variable that is used later.

I know “use shorter functions” but tell that to my coworkers.

zamalek 3 hours ago [-]
The example given isn't that great. Here's a significantly more common one:

    var age = get_string_from_somewhere();
    var age = parse_to_int(age);
Without same-scope shadowing you end up with the obnoxious:

    var age_string = get_string_from_somewhere();
    var age = parse_to_int(age_string);
Note that your current language probably does allow shadowing: in nested scopes (closures).
chrisco255 2 hours ago [-]
Changing the type on a value is an anti-pattern, in my opinion. It's not obnoxious to be explicit in your variable names.
zamalek 1 hours ago [-]
That implies that Hungarian notation is not obnoxious? Sure, that's a fine opinion to have, but I guarantee it is an exceedingly rare one.
physicles 2 hours ago [-]
Over the years, I’ve wasted 1-2 days of my life debugging bugs caused by unintentional variable shadowing in Go (yes, I’ve kept track). Often, the bug is caused by an accidental use of := instead of =. I don’t understand why code that relies on shadowing isn’t harder to follow. Wish I could disable it entirely.
lolinder 2 hours ago [-]
> Often, the bug is caused by an accidental use of := instead of =.

This is a distinctly Go problem, not a problem with shadowing as a concept. In Rust you'd have to accidentally add a whole `let` keyword, which is a lot harder to do or to miss when you're scanning through a block.

There are lots of good explanations in this subthread for why shadowing as a concept is great. It sounds like Go's syntax choices make it bad there.

lnenad 2 hours ago [-]
> There are lots of good explanations in this subthread for why shadowing as a concept is great

Not really. All of them boil down to ergonomics, when in reality it doesn't bring a lot of benefit other than people hating on more descriptive variable names (which is fair).

pcwalton 2 hours ago [-]
You can (assuming you're talking about Rust)! Just use Clippy and add #[deny(clippy::shadow_reuse)]: https://rust-lang.github.io/rust-clippy/master/#shadow_reuse

My position on shadowing is that it's a thing where different projects can have different opinions, and that's fine. There are good arguments for allowing shadowing, and there are good arguments for disallowing it.
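
For example, a sketch like this gets rejected when run through `cargo clippy` with the lint denied (lint name from Clippy's restriction group):

    #![deny(clippy::shadow_reuse)]

    fn main() {
        let input = "42";
        // clippy: `input` shadows a previous binding while reusing its value.
        let input: i32 = input.parse().unwrap();
        println!("{input}");
    }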

mk12 17 minutes ago [-]
This is another big difference between Rust and Zig. Rust lets you have it both ways with configuration. Zig places much more value on being able to read and understand any Zig code in the wild, based only on “it compiles”. Rust’s “it compiles” gives you lots of information about safety (modulo unsafe blocks), but very little about certain other things until you’ve examined the 4-5 places which might be tweaking configuration (#[attributes], various toml files, environment variables, command line flags).
antonvs 3 hours ago [-]
It’s been a feature in languages for at least half a century. Scheme’s lexical scoping supported it in 1975, and Lisp adopted that.
lnenad 2 hours ago [-]
Yeah, it's a feature of a language, doesn't mean we are forced to use it.
hoelle 2 hours ago [-]
> Zig does enhance on C, there is no doubt. I would rather write Zig than C. The design is better, more modern, and the language is safer. But why stop half way? Why fix some problems and ignore the most damaging ones?

I was disappointed when Rust went 1.0. It appeared to be on a good track to dethroning C++ in the domain I work in (video games)... but they locked it a while before figuring out the ergonomics to make it workable for larger teams.

Any language that imbues the entire set of special characters (!#*&<>[]{}(); ...etc) with mystical semantic context is, imo, more interested in making its arcane practitioners feel smart rather than getting good work done.

> I don’t think that simplicity is a good vector of reliable software.

No, but simplicity is often a property of readable, team-scalable, popular, and productive programming languages. C, Python, Go, JavaScript...

Solving for reliability is ultimately up to your top engineers. Rust certainly keeps the barbarians from making a mess in your ivory tower. Because you're paralyzing anyone less technical by choosing it.

> I think my adventure with Zig stops here.

This article is a great critique. I share some concerns about the BDFL's attitudes about input. I remain optimistic that Zig is a long way from 1.0 and am hoping that when Andrew accomplishes his shorter-term goals, maybe he'll have more brain space for addressing some feedback constructively.

pcwalton 1 hours ago [-]
> It appeared to be on a good track to dethroning C++ in the domain I work in (video games)... but they locked it a while before figuring out the ergonomics to make it workable for larger teams.

There are million-line Rust projects now. Rust is obviously workable for larger teams.

> Any language that imbues the entire set of special characters (!#*&<>[]{}(); ...etc) with mystical semantic context is, imo, more interested in making its arcane practitioners feel smart rather than getting good work done.

C uses every one of those symbols.

I think you're talking about @ and ~ boxes. As I recall, those were removed the same year the iPad and Instagram debuted.

hoelle 42 minutes ago [-]
> I think you're talking about @ and ~ boxes. As I recall, those were removed the same year the iPad and Instagram debuted.

Take criticism better.

A language choice on a project means the veterans are indefinitely charged with teaching it to newbies. For all Rust's perks, I judge that it would be a time suck for this reason.

Browsing some random rust game code: [https://github.com/bevyengine/bevy/blob/8c7f1b34d3fa52c007b2...]

    pub fn play<'p>(
        &mut self,
        player: &'p mut AnimationPlayer,
        new_animation: AnimationNodeIndex,
        transition_duration: Duration,
    ) -> &'p mut ActiveAnimation {

[https://github.com/bevyengine/bevy/blob/8c7f1b34d3fa52c007b2...]

    #[derive(Debug, Clone, Resource)]
    #[cfg_attr(feature = "bevy_reflect", derive(Reflect), reflect(Default, Resource))]
    pub struct ButtonInput<T: Copy + Eq + Hash + Send + Sync + 'static> {
        /// A collection of every button that is currently being pressed.
        pressed: HashSet<T>,
        ...

Cool. Too many symbols.

pcwalton 39 minutes ago [-]
That first "random Rust game code" is in fact code I wrote :)

It's the same amount of punctuation as C++, or really any other language with C-like syntax.

grayhatter 1 hours ago [-]
lol, I knew exactly who wrote this once I saw the complaint about shadowing being forbidden. The author and I were just arguing about it the other day on irc. While the author considers it an annoying language bug because it requires creating additional variable names (given refactoring was an unpalatable option), I consider it a feature.

Said arguments have become a recurring and frustrating refrain; when rust imposes some limit or restriction on how code is written, it's a good thing. But if Zig does, it's a problem?

The remainder of the points are quite hollow; far be it from me to complain when someone starts with a conclusion and works their way backwards into an argument... but here I'd have hoped for more content. The duck typing argument is based on minimal or missing documentation, or on the doc generator losing parts of the docs. And as for "comptime is probably not as interesting as it looks", the fact he calls it probably uninteresting highlights the lack of critical examination put in here. comptime is an amazing feature, and it enables a lot of impressive idioms that I enjoy writing.

> I’m also fed up of the skill issue culture. If Zig requires programmers to be flawless, well, I’m probably not a good fit for the role.

But hey, my joke was featured as the closing thought! Zig doesn't require one to be flawless. But it also doesn't try to limit you, or box you into a narrow set of allowed operations. There is the risk that you write code that will crash. But having seen more code with unwrap() or expect() than without, I don't think that's the bar. The difference is that I personally enjoy writing Zig code because Zig tries to help you write code instead of preventing you from writing code. With that does come the need to learn and understand how the code works. Everything is a learnable skill, and I disagree with the author that it's too hard to learn. I don't even think it's too hard for him; he just appears unwilling... and well, he already made up his mind about which language is his favorite.

taurknaut 3 hours ago [-]
I loved this deep-dive of zig.

> There’s a catch, though. Unlike Rust, ErrorType is global to your whole program, and is nominally typed.

What does "global to your whole program" mean? I'd expect types to be available to the whole compilation unit. I'm also weirded out by the fact that zig has a distinct error type. Why? Why not represent errors as normal records?

hansvm 50 minutes ago [-]
> global to your whole program

Zig automatically does what most languages call LTO, so "whole program" and "compilation unit" are effectively the same thing (these error indices don't propagate across, e.g., dynamically linked libraries). If you have a bunch of Zig code calling other Zig code and using error types, they'll all resolve to the same global error type (and calling different code would likely result in a different global error type).

> distinct error type, why?

The language is very against various kinds of hidden "magic." If you take for granted that (1) error paths should have language support for being easily written correctly, and (2) userspace shouldn't be able to do too many shenanigans with control flow, then a design that makes errors special is a reasonable result.

It also adds some homogeneity to the code you read. I don't have to go read how _your_ `Result` type works just to use it correctly in an async context.

The obvious downside is that your use case might not map well to the language's blessed error type. In that case, you just make a normal record type to carry the information you want.

jamii 2 hours ago [-]
What they're trying to convey is that errors are structurally typed. If you declare:

    const MyError = error{Foo};
in one library and:

    const TheirError = error{Foo};
in another library, these types are considered equal. Unlike structs/unions/enums, which are nominal in Zig, as in most languages.

The reason for this, and the reason that errors are not regular records, is to allow type inference to union and subtract error types like in https://news.ycombinator.com/item?id=42943942. (They behave like ocamls polymorphic variants - https://ocaml.org/manual/5.3/polyvariant.html) This largely avoids the problems described in https://sled.rs/errors.html#why-does-this-matter.

On the other hand zig errors can't have any associated value (https://github.com/ziglang/zig/issues/2647). I often find this requires me to store those values in some other big sum type somewhere which leads to all the same problems/boilerplate that the special error type should have saved me from.

throwawaymaths 1 hours ago [-]
if you need values associated with your error you can stash them in an in-out parameter
jamii 20 minutes ago [-]
If I have multiple errors then that in-out parameter has to be a union(enum). And then I'm back to creating dozens of slightly different unions for functions which return slightly different sets of errors. Which is the same problem I have in rust. All of the nice inference that zig does doesn't apply to my in-out parameter either. And the compiler won't check that every path that returns error.Foo always initializes error_info.Foo.
lmm 3 hours ago [-]
> What does "global to your whole program" mean? I'd expect types to be available to the whole compilation unit.

I think they mean you only have one global/shared ErrorType . You can't write the type of function that may yeet one particular, specific type of error but not any other types of error.

chrisco255 3 hours ago [-]
They're really just enum variants. You can easily capture the error and conditionally handle it:

    fn failFn() error{Oops}!i32 {
        try failingFunction();
        return 12;
    }

    test "try" {
        const v = failFn() catch |err| {
            try expect(err == error.Oops);
            return;
        };
        try expect(v == 12); // is never reached
    }

lmm 2 hours ago [-]
> You can easily capture the error and conditionally handle it

Sure. But the compiler won't help you check that your function only throws the errors that you think it does, or that your try block is handling all the errors that can be thrown inside it.

jamii 2 hours ago [-]
> ...the compiler won't help you check that your function only throws the errors that you think it does, or that your try block is handling all the errors that can be thrown inside it.

It will do both of those:

    const std = @import("std");

    fn throws(i: usize) !void {
        return switch (i) {
            0 => error.zero,
            1 => error.one,
            else => error.many,
        };
    }

    fn catches(i: usize) !void {
        throws(i) catch |err| {
            return switch (err) {
                error.one => error.uno,
                else => |other| other,
            };
        };
    }

    pub fn main() void {
        catches(std.os.argv.len) catch |err| {
            switch (err) {
                // Type error if you comment out any of these:
                // note: unhandled error value: 'error.zero'
                error.zero => std.debug.print("0\n", .{}),
                error.uno => std.debug.print("1\n", .{}),
                error.many => std.debug.print("2\n", .{}),
                // Type error if you uncomment this:
                // 'error.one' not a member of destination error set
                //error.one => std.debug.print("1\n", .{}),
            }
        };
    }
It wouldn't hurt to just read the docs before making confident claims.
naasking 3 hours ago [-]
I'm not speaking for Zig, but in principle errors are not values, and often have different control flow and sometimes even data flow constraints.
valenterry 3 hours ago [-]
Can you elaborate more?
chrisco255 3 hours ago [-]
valenterry 2 hours ago [-]
There it says "errors are values" so now that contradicts what OP said.
naasking 1 hours ago [-]
I said I wasn't speaking for Zig specifically, just on general principle that errors are not really values. Many languages reify errors as values to avoid having different semantics for errors, but errors probably should have their own semantics. Zig seems to take a middle ground here, where errors are a special type of value but that still sort of has its own semantics.
valenterry 13 minutes ago [-]
> I said I wasn't speaking for Zig specifically

Lol, you are right. My brain just skipped that part somehow.

chrisco255 2 hours ago [-]
I was generally responding to the whole thread and pointing to how Zig sees errors. Enums are a type of value, yes, but they're typically dealt with differently than other data types.
ethin 3 hours ago [-]
No idea how much the author is experienced at Zig, but my thoughts:

> No typeclasses / traits

This is purposeful. Zig is not trying to be some OOP/Haskell replacement. C doesn't have traits/typeclasses either. Zig prefers explicitness over implicit hacks, and typeclasses/traits are, internally, virtual classes with a vtable pointer. Zig just exposes this to you.

> No encapsulation

This appears to be more a documentation issue than anything else. Zig does have significant issues in that area, but this is to be expected in a language that hasn't even hit 1.0.

> No destructors

Uh... What? Zig does have destructors, in a way. It's called defer and errdefer. Again, it just makes you do it explicitly and doesn't hide it from you.

> No (unicode) strings

People seem to want features like this a lot -- some kind of string type. The problem is that there is no actual "string" type in a computer. It's just bytes. Furthermore, if you have a "Unicode string" type or just a "string" type, how do you define a character? Is it a single codepoint? Is it the number of codepoints that make up a character as per the Unicode standard (and if so, how would you even figure that out)? For example, take a multi-codepoint emoji. In pretty much every "Unicode string" library/language type I've seen, each individual codepoint is a "character". Which means that if you come across a multi-codepoint emoji, those "characters" will just be the individual codepoints that comprise the emoji, not the emoji as a whole.

Zig avoids this problem by just... not having a string type, because we don't live in the age of ASCII anymore, we live in a Unicode world. And Unicode is unsurprisingly extremely complicated. The author tries to argue that just iterating over bytes leads to data corruption and such, but I would argue that having a Unicode string type, separate from all other types, designed to iterate over some nebulous "character" type, would just introduce all kinds of other problems that, I think, many would agree should NOT be the responsibility of the language.

I've heard this criticism from many others who are new to Zig, and although I understand the reasoning behind it, the reasoning behind just avoiding the problem entirely is also very sensible in my mind. Primarily because if Zig did have a full Unicode string and some "character" type, now it'd be on the standard library devs to not only define what a "character" is, and then we risk having something like the C++ Unicode situation where you have a char32_t type, but the standard library isn't equipped to handle that type, and then you run into "Oh this encoding is broken" and on and on and on it goes.

pcwalton 2 hours ago [-]
> typeclasses/traits are, internally, virtual classes with a vtable pointer

No, they're not. Rust "boxed traits" are, but those aren't what the author means.

> Primarily because if Zig did have a full Unicode string and some "character" type, now it'd be on the standard library devs to not only define what a "character" is, and then we risk having something like the C++ Unicode situation where you have a char32_t type, but the standard library isn't equipped to handle that type, and then you run into "Oh this encoding is broken" and on and on and on it goes.

The standard library not being equipped to handle Unicode is the entire problem. Not solving it doesn't avoid the issue: it just makes Unicode safety the programmer's responsibility, increasing the complexity of the problem domain for the programmer and leaving more room for error.

accelbred 1 hours ago [-]
Not being able to easily write a Rust program without Unicode handling being pulled in was a reason I'd chosen C over Rust before. When targeting binary sizes measured in kilobytes, pulling in full Unicode handling is not an option. Especially since programs that don't have direct human interaction rarely actually need Unicode.
metaltyphoon 2 hours ago [-]
> The standard library not being equipped to handle Unicode is the entire problem

Zig: I want to be a safer C

C: I don't have string type

Zig: No… not like that!

throwawaymaths 1 hours ago [-]
> The standard library not being equipped to handle Unicode is the entire problem.

what? unicode is in the standard library.

https://github.com/ziglang/zig/blob/master/lib/std/unicode.z...

llimllib 3 hours ago [-]
> In pretty much every "Unicode string" library/language type I've seen, each individual codepoint is a "character"

languages are actually really inconsistent on what they count as a unicode character: https://hsivonen.fi/string-length/

(I don't broadly disagree with you on unicode support, just linking an article relevant to that claim)

mbb70 3 hours ago [-]
There is no nebulous 'character' type. There are bytes, codepoints and glyphs. All languages with Unicode support allow iterating over each for a given string.
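
For example, in Rust (grapheme iteration needs a third-party crate such as unicode-segmentation):

    fn main() {
        // "a" followed by a combining acute accent: one glyph on screen,
        // two code points, three UTF-8 bytes.
        let s = "a\u{0301}";
        assert_eq!(s.len(), 3);           // bytes
        assert_eq!(s.chars().count(), 2); // Unicode scalar values (code points)
        // Grapheme clusters ("user-perceived characters") are not in std;
        // that's where unicode-segmentation comes in.
    }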
edflsafoiewq 2 hours ago [-]
> Zig does have destructors, in a way. It's called defer and errdefer.

defer ties some code to a static scope. Destructors are tied to object lifetime, which can be dynamic. For example, if you want to remove some elements from an ArrayList of, say, strings, the strings would need to be freed first. defer does not help you, but destructors would.
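
In Rust terms, a minimal sketch of that difference:

    fn main() {
        let mut names = vec![String::from("alice"), String::from("bob")];

        // Removing the element drops (frees) that String right here, because the
        // destructor follows the value's lifetime, not the enclosing scope.
        names.remove(0);

        assert_eq!(names, vec!["bob"]);
    } // the remaining String is freed when `names` goes out of scope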

chrisco255 3 hours ago [-]
For me not having strings in Zig and being forced to use the fairly verbose '[]const u8' syntax every time I need a string was a little annoying at first, but it has had the effect of making me comfortable with the idea of buffers in a general sense, which is critical in systems programming. Most of the things that irked me about Zig when first learning it (I'm only a few weeks into it) have grown on me.
nynx 3 hours ago [-]
Typeclasses are conceptual interfaces. They don’t have anything to do with vtables.
caspper69 2 hours ago [-]
Having just gone down this road in C#, the way Unicode is now handled is via "runes".

Each rune may be comprised of various Unicode characters, which may themselves be 1-4 bytes (in the case of utf-8 encoding).

The one problem I have with this approach is that all of the categorization features operate a level below the runes, so you still have to break them up. The biggest drawback is that, at least in my (admittedly limited) research, there is no such thing as a "base" character in certain runes (such as family emojis: parents with kids). You can mostly dance around it with the vast majority of runes, because one character will clearly be the base character and one (or more) will clearly be overlays, but it's not universal.

silisili 2 hours ago [-]
Go does this too. I generally like the idea a lot, as long as it's consistent. The one thing I don't like is the inconsistency.

Not sure about C#, but in Go for example ranging strings ranges over runes, but indexing pulls a single byte. And len is the byte length rather than rune length.

So basically it's a byte array everywhere except ranging. I guess I would have preferred an explicit cast or conversion to do that instead of by default.

stonogo 2 hours ago [-]
Runes are how UTF-8 has been handled since its invention. It's just taken some platforms longer to get there than others.
wtetzner 3 hours ago [-]
I don't necessarily disagree with not having a string type in a low level language, but you seem very fixated on needing a character type. Why not just have string be an opaque type, and have functions to iterate over code points, grapheme clusters, etc.?