NHacker Next
APT Rust requirement raises questions (lwn.net)
geerlingguy 21 hours ago [-]
I remembered reading about this news back when that first message was posted on the mailing list, and didn't think much of it then (rust has been worming its way into a lot of places over the past few years, just one more thing I tack on for some automation)...

But seeing the maintainer works for Canonical, it seems like the tail (Ubuntu) keeps trying to wag the dog (Debian ecosystem) without much regard for the wider non-Ubuntu community.

I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers", but instead positioned only on the merits of the change.

As an end user, it doesn't concern me too much, but someone choosing to add a new dependency chain to critical software plumbing does, at least slightly, if not done for very good reason.

razighter777 20 hours ago [-]
Agreed. I think that announcement was unprofessional.

This was a unilateral decision affecting others' hard work, and the author didn't give those affected the opportunity to provide feedback on the change.

It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.

This is breaking support for multiple ports to rewrite some feature for a tiny security benefit. And doing so on an unacceptably short timeline. Introducing breakage like this is unacceptable.

There's no clear cost-benefit analysis for this change. Canonical or Debian should work on porting the Rust toolchain (ideally with tier 1 support) to every architecture they release for, and actually put the horse before the cart.

I love and use Rust, it is my favorite language and I use it in several of my OSS projects, but I'm tired of this "rewrite it in Rust" evangelism and the reputational damage it does to the Rust community.

travisgriggs 19 hours ago [-]
> I love and use Rust, it is my favorite language and I use it in several of my OSS projects, but I'm tired of this "rewrite it in Rust" evangelism and the reputational damage it does to the Rust community.

Thanks for this.

I know intellectually, that there are sane/pragmatic people who appreciate Rust.

But often the vibe I’ve gotten is the evangelism, the clear “I’ve found a tribe to be part of and it makes me feel special”.

So it helps when the reasonable signal breaks through the noisy minority.

razighter777 17 hours ago [-]
Most of us sane people tend to be more quiet unfortunately.

I enjoy rust, but I enjoy not breaking things for users and making lives harder for other devs even more.

vablings 19 hours ago [-]
>I know intellectually, that there are sane/pragmatic people who appreciate Rust.

For the most part that is almost everyone who works on Rust and writes Rust. The whole coreutils saga was pretty much entirely caused by Canonical; the coreutils rewrite project was originally a hobby project, IIRC, and NOT ready for prod.

For the most part the coreutils rewrite is going well, all things considered: bugs are fixed quickly, and performance will probably exceed the original implementation in some cases, since concurrency is a cake-walk.
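To illustrate that "concurrency is a cake-walk" point, here's a minimal sketch using only the standard library (the data and split are made up for illustration): scoped threads can borrow local data, and the compiler rejects data races at compile time.

```rust
use std::thread;

fn main() {
    let data = vec![1u64, 2, 3, 4, 5, 6, 7, 8];
    let mid = data.len() / 2;
    // Split into two disjoint slices; the borrow checker guarantees
    // the threads below can't alias the same mutable data.
    let (left, right) = data.split_at(mid);

    let total = thread::scope(|s| {
        // Scoped threads may borrow `left` and `right` directly,
        // because the scope guarantees they join before `data` drops.
        let a = s.spawn(|| left.iter().sum::<u64>());
        let b = s.spawn(|| right.iter().sum::<u64>());
        a.join().unwrap() + b.join().unwrap()
    });

    assert_eq!(total, 36);
    println!("total = {total}");
}
```

`std::thread::scope` (stable since Rust 1.63) is what makes this painless: no `Arc`, no cloning, just borrowed slices.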

The whole rewrite-it-in-Rust drive largely stemmed from the idea that if you have a program in C and a program in Rust, then the program in Rust is "automatically" better, which is often the case. The exception is very large, battle-tested projects with custom tooling in place to somewhat reduce the issues that make C/C++ a nightmare. Rust ships with the borrow checker by default, so logically it's like for like.

In the real world that is not always the case: there is still plenty of opportunity for straight-up logic bugs and crashes (see the Cloudflare saga) that are due entirely to bad programming practices.

Rust is the nail and the hammer, but you can still hit your finger if you don't know how to swing it properly.

FYI, for the purpose of disclosing bias: I am one of the few "Rust first" developers. I learned the language in 2021, and it was the first "real" programming language I learned how to use effectively. Any attempts I have made to dive into other languages have been short-lived and incredibly frustrating, because Rust is a first-class experience of how a systems programming language should be made.

influx 19 hours ago [-]
It really makes me upset that we are throwing away decades of battle tested code just because some people are excited about the language du jour. Between the systemd folks and the rust folks, it may be time for me to move to *BSD instead of Linux. Unfortunately, I'm very tied to Docker.
stouset 15 hours ago [-]
That “battle-tested code” is often still an enduring and ongoing source of bugs. Maintainers have to deal with the burden of working in a 20+ year-old code base with design and architecture choices that probably weren’t even a great idea back then.

Very few people are forcing “rewrite in rust” down anyone’s throats. Sometimes it’s the maintainers themselves who are trying to be forward-thinking and undertake a rewrite (e.g., fish shell), sometimes people are taking existing projects and porting them just to scratch an itch and it’s others’ decisions to start shipping it (e.g., coreutils). I genuinely fail to see the problem with either approach.

C’s long reign is coming to an end. Some projects and tools are going to want to be ahead of the curve, some projects are going to be behind the curve. There is no perfect rate at which this happens, but “it’s battle-tested” is not a reason to keep a project on C indefinitely. If you don’t think {pet project you care about} should be in C in 50 years, there will be a moment where people rewrite it. It will be immature and not as feature-complete right out the gate. There will be new bugs. Maybe it happens today, maybe it’s 40 years from now. But the “it’s battle tested, what’s the rush” argument can and will be used reflexively against both of those timelines.

vablings 8 hours ago [-]
I agree that throwing away battle-tested code is wasteful and often not required. Most people are not of the mindset of just throwing things away, but there is a drive to make things better. There are some absolute monoliths, such as the Linux kernel, that will likely never break free of their C shackles, and that's completely okay and acceptable to me.
johnmaguire 18 hours ago [-]
systemd has been the de facto standard for over a decade now and is very stable. I have found that even most people who complained about the initial transition are very welcoming of its benefits now.
phicoh 16 hours ago [-]
Depends a bit on how you define systemd. Just found out that the systemd developers don't understand DNS (or IPv6). Interesting problems result from that.
johnmaguire 15 hours ago [-]
> Just found out that the systemd developers don't understand DNS (or IPv6).

Just according to GitHub, systemd has over 2,300 contributors. Which ones are you referring to?

And more to the point, what is this supposed to mean? Did you encounter a bug or something? DNS on Linux is sort of famously a tire fire, see for example https://tailscale.com/blog/sisyphean-dns-client-linux ... IPv6 networking is also famously difficult on Linux, with many users still refusing to even leave it enabled, frustratingly for those of us who care about IPv6.

crote 18 hours ago [-]
Well, what's the alternative?

It is basic knowledge that memory safety bugs are a significant source of vulnerabilities, and by now it is well-established that the first developer who can write C without introducing memory safety bugs hasn't been born yet. In other words: if you care about security at all, continuing with the status quo isn't an option.

The C ecosystem has tried to solve the problem with a variety of additional tooling. This has helped a bit, but didn't solve the underlying problem. The C community has demonstrated that it is both unwilling and unable to evolve C into a memory-safe language. This means that writing additional C code is a Really Bad Idea.

Software has to be maintained. Decade-old battle-tested codebases aren't static: they will inevitably require changes, and making changes means writing additional code. This means that your battle-tested C codebase will inevitably see changes, which means it will inevitably see the introduction of new memory safety bugs.

Google's position is that we should simply stop writing new code in C: you avoid the high cost and real risk of a rewrite, and you also stop the never-ending flow of memory safety bugs. This approach works well for large and modular projects, but doing the same in coreutils is a completely different story.

Replacing battle-tested code with fresh code has genuine risks, there's no way around that. The real question is: are we willing to accept those short-term risks for long-term benefits?

And mind you, none of this is Rust-specific. If your application doesn't need the benefits of C, rewriting it in Python or TypeScript or C# might make even more sense than rewriting it in Rust. The main argument isn't "Rust is good", but "C is terrible".

eviks 15 hours ago [-]
But the result of the battle test is the reason to throw the crippled veteran away!
chillingeffect 18 hours ago [-]
Even worse, the license change (GPL -> MIT) in the Rust replacements will be less beneficial to the community.
steveklabnik 18 hours ago [-]
Rust has no specific license requirements on code written in it. People choose whatever license they prefer.
chillingeffect 13 hours ago [-]
True, but you might want to look into the licenses people are actually choosing for Rust versions of coreutils/uutils and who's promoting them.
steveklabnik 12 hours ago [-]
Sure, those authors chose that license because they did not particularly care about the politics of licenses and chose the most common one in the Rust ecosystem, which is MIT/Apache 2.

If folks want more Rust projects under licenses they prefer, they should start those projects.

kstrauser 10 hours ago [-]
I released my most recent Rust project under the GPLv3. The first issue was someone asking me to relicense it under MIT. I politely declined.

I bring this up because no matter what you choose, someone will wish it was otherwise.

kstrauser 18 hours ago [-]
I agree with everything you've said here, except that the reality of speaking with a "rust first" developer is making me feel suddenly ancient. But that aside, the memory safety parts are a huge benefit, but far from the only one. Option and Result types are delightful. Exhaustive matching expressions that won't compile if you add a new variant that's not handled are huge. Types that make it impossible to accidentally pass a PngImage into a function expecting a str, even though they might both be defined as contiguous series of bytes down deep, makes lots of bugs impossible. A compiler that gives you freaking amazing error messages that tell you exactly what you did wrong and how you can fix it sets the standard, from my experience. And things like "cargo clippy" which tell you how you could improve your code, even if it's already working, to make it more efficient or more idiomatic, are icing on the cake.

People so often get hung up on Rust's memory safety features, and dismiss it as though that's all it brings to the table. Far from it! Even if Rust were unsafe by default, I'd still rather use it than, say, C or C++ to develop large, robust apps, because it has a long list of features that make it easy to write correct code, and really freaking challenging to write blatantly incorrect code.

Frankly, I envy you, except that I don't envy what it's going to be like when you have to hack on a non-Rust code base that lacks a lot of these features. "What do you mean, int overflow. Those are both constants! How come it didn't let me know I couldn't add them together?"
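For anyone who hasn't seen the features mentioned above, here's a small self-contained sketch (the `Signal` enum and function names are made up for illustration):

```rust
// Exhaustive matching: add a new variant to Signal and `describe`
// stops compiling until the new case is handled.
enum Signal {
    Continue,
    Stop,
}

fn describe(s: &Signal) -> &'static str {
    match s {
        Signal::Continue => "keep going",
        Signal::Stop => "halt",
    }
}

// Option forces callers to handle absence explicitly;
// there is no null to forget about.
fn first_char(s: &str) -> Option<char> {
    s.chars().next()
}

fn main() {
    assert_eq!(describe(&Signal::Continue), "keep going");
    assert_eq!(describe(&Signal::Stop), "halt");

    assert_eq!(first_char("rust"), Some('r'));
    assert_eq!(first_char(""), None);

    // Overflow is caught too: checked arithmetic returns None instead
    // of silently wrapping, and constant overflow such as
    // `let x: u8 = 200 + 100;` is rejected at compile time.
    assert_eq!(200u8.checked_add(100), None);
}
```

None of this involves the borrow checker; it's the everyday type-system ergonomics the comment above is describing.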

antonvs 18 hours ago [-]
Much of the drive to rewrite software in Rust is a reaction to the decades-long dependence on C and C++. Many people out there sit in the burning room like the dog in that meme, saying "this is fine". Most of them don't have to deal at all directly with the consequences involved.

Rust is the first language for a long time with a chance at improving this situation. A lot of the pushback against evangelism is from people who simply want to keep the status quo, because it's what they know. They have no concept of the systemic consequences.

I'd rather see over-the-top evangelism than the lack of it, because the latter implies that things aren't going to change very fast.

razighter777 17 hours ago [-]
> I'd rather see over-the-top evangelism than the lack of it, because the latter implies that things aren't going to change very fast.

No new technology should be an excuse to engage in unprofessional conduct.

When you propose changes to software, you listen to feedback, provide analysis of the benefits and detriments, and make an informed decision.

Rust isn't special, and isn't a pass to cause endless heartache for end users and developers because your code is in a "safer" language.

New rust code should be held to the same standards as new C and C++ code that causes breakage.

Evangelism isn't useful here, let the tool speak for itself.

eggy 18 hours ago [-]
If you were right, then people should not be using Rust or C/C++. They should be using SPARK/Ada. The SPARK programming language, a subset of Ada, was used for the development of safety-critical software in the Eurofighter Typhoon, a British and European fighter jet. The software for mission computers and other systems was developed by BAE Systems using the GNAT Pro environment from AdaCore, which supports both Ada and SPARK. It's not just choosing the PL, but the whole environment including the managers.

This is an interesting read on software projects and failure: https://spectrum.ieee.org/it-management-software-failures

transpute 17 hours ago [-]
Nvidia evaluated Rust and then chose SPARK/Ada for root of trust for GPU market segmentation licensing, which protects 50% profit margin and $4T market cap.

Nvidia Security Team: “What if we just stopped using C?”, 170 comments (2022), https://news.ycombinator.com/item?id=42998383

antonvs 12 hours ago [-]
> If you were right, then people should not be using Rust or C/C++. They should be using SPARK/Ada.

Not all code needs that level of assurance. But almost all code can benefit from better memory safety than C or C++ can reliably provide.

Re what people "should" be using, that's why I chose my words carefully and wrote, "Rust is the first language for a long time with a chance at improving this situation."

Part of the chance I'm referring to is the widespread industry interest. Despite the reaction of curmudgeons on HN, all the hype around Rust is a good thing for wider adoption.

We're always going to have people resistant to change. They're always going to use any excuse to complain, including "too much hype!" It's meaningless noise.

layer8 18 hours ago [-]
You can’t change things faster than persuading the people that maintain the things. Over-the-top evangelism doesn’t work well for persuasion.
crote 18 hours ago [-]
On the other hand, the presence of an alternative is the persuasion.

It's very easy to justify for yourself why you aren't addressing the hard problems in your codebase. Combine that with a captive audience, and you end up with everyone running the same steaming heap of technical debt and being unhappy about it.

But the second an alternative starts to get off the ground there's suddenly a reason to address those big issues: people are leaving, and it is clear that complacency is no longer an option. Either evolve, or accept that you'll perish.

antonvs 18 hours ago [-]
That was probably a mischaracterization on my part. I wouldn't consider rewriting almost everything useful that's currently in C or C++ to be over the top. That would be a net good.

Posts that say "I rewrote X in Rust!" shouldn't actually be controversial. Every time you see one, you should think to yourself wow, the software world is moving towards being more stable and reliable, that's great!

uecker 18 hours ago [-]
But it is nonsense. Every time someone rewrites something (in Rust or anything else), I instead worry about what breaks again, what important feature is lost for the next decade, how much working knowledge is lost, what muscle memory is now useless, what documentation is outdated, etc.

I also doubt Rust brings as many advantages in terms of stability as people claim. The C code I rely on in my daily work basically never fails (e.g. I can't remember "vim" ever crashing on me in the 30 years I've used it). That this is all rotten C code that needs to be rewritten is just nonsense. IMHO it would be far more useful to invest in proper maintenance and incremental improvements.

antonvs 12 hours ago [-]
Regarding VIM - it's not as risky as something that's exposed over a network, but it's had plenty of CVEs, and skimming them shows many if not most are related to memory safety. See:

https://www.cvedetails.com/vulnerability-list/vendor_id-8218...

antonvs 12 hours ago [-]
You want the computing infrastructure to remain essentially as it was in the 1970s. I don't.
raxxorraxor 17 hours ago [-]
Sometimes good things are ruined by people around. I think Rust is fine, although I doubt its constraints are universally true and sensible in all scenarios.

This is also not an endorsement of C/C++.

pjmlp 18 hours ago [-]
Except in many of such cases, like here in apt, any compiled language with GC/RC would do.

This is the kind of UNIX stuff that we would even write in Perl or Tcl back in the day.

crote 18 hours ago [-]
> It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.

The problem is that those ports aren't supported and see basically zero use. Without continuous maintainer effort to keep software running on those platforms, subtle platform-specific bugs will creep in. Sometimes it's the application's fault, but just as often the blame will lie with the port itself.

The side-effect of ports being unsupported is that build failures or test failures - if they are even run at all - aren't considered blockers. Eventually their failure becomes normal, so their status will just be disregarded as noise: you can't rely on them to pass when your PR is bug-free, so you can't rely on their failure to indicate a genuine issue.

kevin_thibedeau 19 hours ago [-]
> Canonical or debian should work on porting the rust toolchain (ideally with tier 1 support) to every architecture they release for

This will be an impediment for new architectures in the future. Instead of just "builds with gcc" we would need to wait for Rust support.

crote 17 hours ago [-]
> Instead of just "builds with gcc" we would need to wait for Rust support.

There's always rustc_codegen_gcc (a GCC backend for rustc) and gccrs (a Rust frontend for GCC). They aren't quite production-ready yet, but there's a decent chance they're good enough for the handful of hobbyists wanting to run the latest applications on historical hardware.

As to adding new architectures: it just shifts the task from "write gcc backend" to "write llvm backend". I doubt it'll make much of a difference in practice.

oconnor663 16 hours ago [-]
> to rewrite some feature for a tiny security benefit

For what it's worth, the zero->one introduction of a new language into a big codebase always comes with a lot of build changes, downstream impact, debate, etc. It's good for that first feature to be some relatively trivial thing, so that it doesn't make the changes any bigger than they have to be, and so that it can be delayed or reverted as needed without causing extra trouble. Once everything lands, then you can add whatever bigger features you like without disrupting things.

No comment on the rest of the thread...

Aurornis 19 hours ago [-]
> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit. And doing so on an unacceptably short timeline. Introducing breakage like this is unacceptable.

Normally I'd agree, but the ports in question are really quite old and obscure. I don't think anything would have changed with an even longer timeline.

I think the best move would have been to announce deprecation of those ports separately. As it was announced, people who will never be impacted by their deprecation are upset because the deprecation was tied to something else (Rust) that is a hot topic.

If the deprecation of those ports was announced separately I doubt it would have even been news. Instead we’ve got this situation where people are angry that Rust took something away from someone.

steveklabnik 19 hours ago [-]
Those ports were never official, and so aren't being deprecated. Nothing changes about Debian's support policies with this change.

EDIT: okay so I was slightly too strong: some of them were official as of 2011, but haven't been since then. The main point that this isn't deprecating any supported ports is still accurate.

Aurornis 19 hours ago [-]
That’s helpful info, but I don’t think it will change any of the minds that are angry about what they see as Rust taking something away from someone.

It’s the way the two actions were linked that caused the controversy.

foota 19 hours ago [-]
> It disregards the importance of ports. Even if an architecture isn't widely used, supporting multiple architectures can help reveal bugs in the original implementation that wouldn't otherwise be obvious.

Imo this is true for going from one to a handful, but less true when going from a handful to more. Afaict there are 6 official ports and 12 unofficial ports (from https://www.debian.org/ports/).

wtallis 18 hours ago [-]
It really comes down to which architectures you're porting to. The two biggest issues are big endian vs little endian, and memory consistency models. Little endian is the clear winner for actively-developed architectures, but there are still plenty of vintage big endian architectures to target, and it looks like IBM mainframes at least are still exclusively big endian.

For memory consistency, Alpha historically had value as the weakest and most likely to expose bugs. But nobody really wants to implement hardware like that anymore, almost everything falls somewhere on the spectrum of behavior bounded by x86 (strict) and Arm (weaker), and newer languages (eg. C++ 11) mean newer code can be explicit about its expectations rather than ambiguous or implicit.
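To make the endianness half of that concrete, here's a small illustrative example of the kind of assumption that only breaks on a big-endian port (the value is arbitrary):

```rust
// The same 32-bit value has a different byte layout depending on
// endianness; code that reinterprets raw bytes and assumes the low
// byte comes first works only on little-endian machines.
fn main() {
    let v: u32 = 0x0A0B_0C0D;

    assert_eq!(v.to_be_bytes(), [0x0A, 0x0B, 0x0C, 0x0D]); // big-endian: MSB first
    assert_eq!(v.to_le_bytes(), [0x0D, 0x0C, 0x0B, 0x0A]); // little-endian: LSB first

    // Reading big-endian bytes as if they were little-endian silently
    // produces the wrong value - the classic bug a port exposes.
    assert_eq!(u32::from_le_bytes(v.to_be_bytes()), 0x0D0C_0B0A);
}
```

This is exactly why a big-endian CI target used to catch sloppy byte-twiddling that an x86-only test matrix never would.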

dathinab 18 hours ago [-]
> and the author didn't provide them the opportunity to provide feedback on the change.

this is wrong; the author wrote a mail about _intended_ changes _half a year_ before shipping them, on the right Debian mailing list. That is _exactly_ how giving people an opportunity to give feedback before making a change works...

Sure, they made it clear they don't want any discussion to be side-tracked onto topics about things Debian doesn't officially support. That is not nice, but understandable; I have seen way too much time wasted on discussions being derailed.

The only problem here is people overthinking things and/or having issues with very direct language IMHO.

> This is breaking support for multiple ports to rewrite some feature for a tiny security benefit

It's not breaking anything supported.

The only things breaking are unsupported, and only niche-used at that.

Nearly all projects have very limited capacities and have to draw boundaries, and the most basic boundary is unsupported means unsupported. This doesn't mean you don't keep unsupported use cases in mind/avoid accidentally breaking them, but it means they don't majorly influence your decision.

> And doing so on an unacceptably short timeline

Half a year for a change which only breaks unsupported things isn't "unacceptably short"; it's actually pretty long. If this weren't OSS you could be happy to get one month, and most likely less. People complain about how few resources OSS projects have, but the scary truth is most commercial projects have even fewer resources and must ship by a deadline. Hence why it's very common for them to be far worse when it comes to code quality, technical debt, not correctly handled niche error cases, etc.

> to every architecture they release for

The Rust toolchain has support for every architecture _they_ release for; it breaks architectures that niche, unofficial 3rd-party ports support. Which is sad, sure, but unsupported is in the end unsupported.

> cost-benefit analysis done for this change.

Who says it wasn't done at all? People have done so over and over on the internet for all kinds of Linux distributions. But either way, you wouldn't include that in a mail announcing an intent to change (as you don't want discussions to be side-tracked). Also, the benefits are pretty clear:

- using Sequoia for PGP seems to be the main driving force behind this decision; this project exists because of repeatedly running into issues (including security issues) with the existing PGP tooling. It happens to use Rust, but if there were no Rust it would still exist, just using a different language.

- some file-format parsing is in a pretty bad state, to the point that you most likely will rewrite it to fix it/make it robust. When doing so anyway, using Rust is preferable.

- and long term: due to the clear, proven(1) benefits of using Rust for _new_ projects/code, increasingly more of them use it; by not "allowing" Rust to be required, Debian bars itself from using any such project (like e.g. Sequoia, which seems to be the main driver behind this change)

> this "rewrite it in rust" evangilism

which isn't part of this discussion at all.

The main driving force seems to be to use Sequoia, not because Sequoia is in Rust, but because Sequoia is very well made and well tested.

Similarly, Sequoia isn't a "let's rewrite everything in Rust" project; rather, the state of PGP tooling is so painful for certain use cases (not all), in ways you can't fix by trying to contribute upstream, that some people needed new tooling, and Rust happened to be the choice for implementing it.

ForHackernews 17 hours ago [-]
> Canonical or debian should work on porting the rust toolchain (ideally with tier 1 support) to every architecture they release for, and actually put the cart before the horse.

They already have a Rust toolchain for every system Debian releases for.

The only architectures they're arguing about are non-official Debian ports for "Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4)", two of which are so obscure I've never even heard of them, and one of the others is most famous for powering retro video game systems like the Sega Genesis.

pjmlp 18 hours ago [-]
I fully agree, and as far as command-line utility applications are concerned, I see no benefit to using Rust's borrow checker.

At most, if a rewrite were to happen, it would make much more sense in a compiled language with automatic resource management.

crote 17 hours ago [-]
Command line utilities often handle not-fully-trusted data, and are often called from something besides an interactive terminal.

Take for example git: do you fully trust the content of every repository you clone? Sure, you'll of course compile and run it in a container, but how prepared are you for the possibility of the clone process itself resulting in arbitrary code execution?

The same applies to the other side of the git interaction: if you're hosting a git forge, it is basically a certainty that whatever application you use will call out to git behind the scenes. Your git forge is connected to the internet, so anyone can send data to it, so git will be processing attacker-controlled data.

There are dozens of similar scenarios involving tools like ffmpeg, gzip, wget, or imagemagick. The main power of command line utilities is their composability: you can't assume it'll only ever be used in isolation with trusted data!

pjmlp 17 hours ago [-]
None of that requires a borrow checker.

Any memory safe compiled managed language will do.

crote 17 hours ago [-]
That's definitely true!

Some people might complain about the startup cost of a language like Java, though: there are plenty of scripts around which are calling command-line utilities in a very tight loop. Not every memory-safe language is suitable for every command-line utility.

pjmlp 16 hours ago [-]
Java is not the only option, and even then, GraalVM and OpenJ9 exist; long gone are the days when people had to pay for something like Excelsior JET.
eggy 18 hours ago [-]
I totally agree. In reality, today, if you want to produce auditable high-integrity, high-assurance, mission-critical software, you should be looking at SPARK/Ada and even F* (fstar). SPARK has legacy real-world apps and a great ecosystem for this type of software. F* is being used on embedded and in other real-world apps where formal verification is necessary or highly advantageous. Whether I like Rust or not should not be the defining factor. AdaCore has a verified Rust compiler, but the tooling around it does not compare to that around SPARK/Ada. I've heard younger people complain about PLs being verbose, boring, or not their thing, and unless you're a diehard SPARK/Ada person, you probably feel that way about it too. But sometimes the tool doesn't have to be sexy or the latest thing to be the right thing to use. Name one real-world Rust app older than 5 years that is in this category.
crote 17 hours ago [-]
> Name one Rust realworld app older than 5 years that is in this category.

Your "older than 5 years" requirement isn't really fair, is it? Rust itself had its first stable release barely 10 years ago, and mainstream adoption has only started happening in the last 5 years. You'll have trouble finding any "real-world" Rust apps older than 5 years!

As to your actual question: The users of Ferrocene[0] would be a good start. It's Rust but certified for ISO 26262 (ASIL D), IEC 61508 (SIL 4) and IEC 62304 - clearly someone is interested in writing mission-critical software in Rust!

[0]: https://ferrocene.dev/

eggy 16 hours ago [-]
The point was: how would you justify choosing Rust based on any real-world proof? Maybe it will be ready in a few years, but even then it is far from achieving what you already have in SPARK, along with a proven legacy. I am very familiar with this, and I still chose SPARK/Ada instead of Rust. SPARK is already certified for all of this. And aerospace, railway, and other high-integrity industries are already familiar with the output of the SPARK tools, so there's less friction and time in auditing them for certification. Aside from AdaCore, who collaborated with Ferrocene to get a compiler certified, I don't see much traction to change our decision. We are creating show-control software for cyber-physical systems with potentially dire consequences, so we did a very in-depth study in Q1 2025, and Rust came up short.
locknitpicker 19 hours ago [-]
> I love and use Rust, it is my favorite language and I use it in several of my OSS projects, but I'm tired of this "rewrite it in Rust" evangelism and the reputational damage it does to the Rust community.

This right here.

As a side note, I was reading one of Cloudflare's docs on how it implemented its firewall rules, and it's so utterly disappointing how the document stops being informative and suddenly starts to read like a parody of the whole cargo cult around Rust. Rust this, Rust that, and there I was trying to read up on how Cloudflare actually supports firewall rules. The way they focus on a specific and frankly irrelevant implementation detail conveys the idea that things are run by amateurs who are charmed by a shiny toy.

Aurornis 20 hours ago [-]
> I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers", but instead positioned only on the merits of the change.

The wording could have been better, but I don’t see it as a dig. When you look at the platforms that would be left behind they’re really, really old.

It’s unfortunate that it would be the end of the road for them, but holding up progress for everyone to retain support for some very old platforms would be the definition of the tail wagging the dog, and any project that does so is making a mistake.

It might have been better to leave out any mention of the old platforms in the Rust announcement and wait for someone to mention it in another post. As it was written, it became an unfortunate focal point of the announcement despite having such a small impact that it shouldn’t be a factor holding up progress.

miladyincontrol 19 hours ago [-]
Not just really, really old, but they have in fact long since been deprecated from any semblance of official support.

I get the friction, especially for younger contributors, not that this is the case here. However, there are architectures that haven't even received a hardware revision in their lifetime, and some old heads will treat even the slightest inconvenience to their hobbyist port as a personal slight for which heads must roll.

Aurornis 19 hours ago [-]
I haven't seen any complaints from anyone who uses those ports personally. I would bet there's someone out there who uses Debian on those platforms, but 100% of the complaining I've seen online has been from people who don't use those ports.

It's the idea that's causing the backlash, not the impact.

jancsika 18 hours ago [-]
> The wording could have been better, but I don’t see it as a dig.

He created (or at least re-activated) a dichotomy for zero gain, and he vastly increased the expectations for what a Rust rewrite can achieve. That is very, very bad in a software project.

The evidence for both is in your next paragraph. You immediately riff on his dichotomy:

> It’s unfortunate that it would be the end of the road for them, but holding up progress for everyone to retain support for some very old platforms would be the definition of the tail wagging the dog.

(My emphasis.)

He wants to do a rewrite in Rust to replace old, craggy C++ that is so difficult to reason about that there's no chance of attracting new developers to the maintenance team with it. Porting to Rust therefore a) addresses memory safety, b) gives a chance to attract new developers to a core part of Debian, and c) gives the current maintainer a way to eventually leave gracefully in the future. I think he even made some of these points here on HN. Anyone who isn't a sociopath sympathizes with these points. More importantly, accidentally introducing some big, ugly bug in Rust apt isn't at odds with these goals. It's almost an expected part of the growing pains of a rewrite plus onboarding new devs.

Compare that to "holding up progress for everyone." Just reading that phrase makes me force-sensitive like a Jedi: I can feel the spite of dozens of HN'ers tingling at that and other phrases in these HN comments as they sharpen their hatred, ready to pounce on the Rust Evangelists the moment this project hits a snag. (And, like any project, it will hit snags.)

1. "I'm holding on for dear life here, I need help from others and this is the way I plan to get that help"

2. "Don't hold back everyone else's progress, please"

The kind of people who hear "key party" and imagine clothed adults reciting GPG fingerprints need to comprehend that #1 and #2 are a) completely different strings and b) have very different-- let's just say magical-- effects on the behavior of even small groups of humans.

reidrac 21 hours ago [-]
> As an end user, it doesn't concern me too much ...

It doesn't concern me either, but there's some attitude here that makes me uneasy.

This could have been managed better. I see a similar change in the future that could affect me, and now there will be precedent. With Canonical paying the devs and all, it isn't a great way of influencing a community.

tremon 19 hours ago [-]
I agree. It's sad to see maintainers take a "my way or the highway" approach to package maintenance, but this attitude has gradually become more accepted in Debian over the years. I've seen this play before, with different actors: gcc maintainers (regarding cross-bootstrapping ports), udev (regarding device naming, I think?), systemd (regarding systemd), and now with apt. Not all of them involved Canonical employees, and sometimes the Canonical employees were the voice of reason (e.g. that's how I remember Steve Langasek).

I'm sure some will point out that each example above was just an isolated incident, but I perceive a growing pattern of incidents. There was a time when Debian proudly called itself "The Universal Operating System", but I think that hasn't been true for a while now.

mschuster91 19 hours ago [-]
> It's sad to see maintainers take a "my way or the highway" approach to package maintenance, but this attitude has gradually become more accepted in Debian over the years.

It's frankly the only way to maintain a distribution that relies almost completely on volunteer work! The more options there are, the more expensive (in human cost, engineering time, and hardware) testing gets.

It's one thing if you're, say, Red Hat with a serious amount of commercial customers, they can and do pay for conformance testing and all the options. But for a fully FOSS project like Debian, eventually it becomes unmaintainable.

Additionally, the more "liberty" distributions take in how the system is set up, the more work software developers have to put in. Just look at autotools, an abomination that is sadly necessary.

Onavo 19 hours ago [-]
> Canonical paying Devs and all, it isn't a great way of influencing a community.

That's kind of the point of modern open source organizations. Let corporations fund the projects, and in exchange they get a say in terms of direction, and hopefully everything works out. The bigger issue with Ubuntu is that they lack vision, and when they ram things through, they give up at the slightest hint of opposition (and waste a tremendous amount of resources and time along the way). For example, Mir and Unity were perfectly fine technologies, but they retired them because they didn't want to see things through. For such a successful company, it's surprising that their technical direction-setting is so unserious.

https://www.reddit.com/r/linux/comments/15brwi0/why_canonica...

astrobe_ 18 hours ago [-]
> I think the whole message would be more palatable if it weren't written as a decree including the dig on "retro computers"

Yes, and more generally, as far as I am concerned, the antagonizing tone of the message, which is probably partly responsible for this micro-drama, is typical of some Rust zealots who never miss an occasion to remind C/C++ developers that they are dinosaurs (in their eyes). When you promote your thing by belittling others, you are doing it wrong.

gorgoiler 21 hours ago [-]
There are many high profile DDs who work or have worked for Canonical who are emphatically not the inverse — Canonical employees who are part of the Debian org.

The conclusion you drew is perfectly reasonable but I’m not sure it is correct, especially when in comparison Canonical is the newcomer. It could even be seen to impugn their integrity.

ndiddy 20 hours ago [-]
If you look at the article, it seems like the hard dependency on Rust is being added for parsing functionality that only Canonical uses:

> David Kalnischkies, who is also a major contributor to APT, suggested that if the goal is to reduce bugs, it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats that Klode mentioned from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates, he said, and the only ""serious usage"" of apt-ftparchive was by Klode's employer, Canonical, for its Launchpad software-collaboration platform. If those were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary for any given port.

eichin 19 hours ago [-]
Mmm, apt-ftparchive is pretty useful for cooking up repos for "in-house" distros (which we certainly thought was serious...) but those tools are already a separate binary package (apt-utils) so factoring them out at the source level wouldn't be particularly troublesome. (I was going to add that there are also nicer tools that have turned up in the last 10 years but the couple of examples I looked at depend on apt-utils, oops)
mikepurvis 19 hours ago [-]
apt-utils comes from the same top-level source package though:

https://packages.debian.org/source/sid/apt

I know you can make configure-time decisions based on the architecture and ship a leaner apt-utils on a legacy platform, but it's not as obvious as "oh yeah that thing is fully auxiliary and in a totally different codebase".

gorgoiler 20 hours ago [-]
I understand, but the comment to which I was replying implied that this keeps happening, and in general. That’s not fair to the N-1 other DDs who aren’t the subject of this LWN article (which I read!)
fn-mote 21 hours ago [-]
The most interesting criticism / idea in the article was that the parts that are intended for Rust-ification should actually be removed from core apt.

> it would be better to remove the code that is used to parse the .deb, .ar, and .tar formats [...] from APT entirely. It is only needed for two tools, apt-ftparchive and apt-extracttemplates [...]

Another interesting, although perhaps tangential, criticism was that the "new solver" currently lacks a testsuite (unit tests; it has integration tests). I'm actually kind of surprised that writing a dependency solver is a greenfield project instead of using an existing one. Or is this just a dig at something that pulls in a well-tested external module for solving?

Posted in curiosity, not knowing much about apt.

dontlaugh 19 hours ago [-]
It seems silly to say that it has no tests. If I had to pick between unit and integration tests, I'd pick integration tests every time.
sedatk 19 hours ago [-]
It has integration tests.
catlifeonmars 21 hours ago [-]
Dependency solvers are actually an area that can benefit from updating IMO.
stonemetal12 21 hours ago [-]
Given that Cargo is written in Rust, you would think there would be at least one battle tested solver that could be used. Perhaps it was harder to extract and make generic than write a new one?
steveklabnik 21 hours ago [-]
Cargo's solver incorporates concepts that .debs don't have, like Cargo features, and I'm sure that .debs have features that Cargo packages don't have either.
mikepurvis 18 hours ago [-]
Historically apt hasn't had much of a "solver". It basically takes the user's upgrade/install action; if there's some conflict or versioned requirement, it goes to the candidate (≈newest, barring pinfile shenanigans) of the involved packages, and if there's still a conflict, it bails.

It was always second-tier utilities like Aptitude that tried to search for a "solution" to conflicting packaging constraints, but this has always been outside of the core functionality, and if you accepted one of Aptitude's proposed paths, you would do so knowing that the next apt dist-upgrade was almost certainly going to hose everything again.

I think the idea in Apt-world is that it's the responsibility of the archive maintainer to at all times present a consistent index for which the newest versions of everything can coexist happily together. But this obviously breaks down when multiple archives are active on the same apt conf.

pornel 14 hours ago [-]
Cargo isn't satisfied with its own solver either. Solvers are a hard and messy problem.

The problem is theoretically NP-complete (it maps onto SAT), but in practice even harder than that: users also care about picking solutions that optimize for multiple criteria like minimal changes, more recent versions, and minimal duplication (if multiple versions can coexist), all while having easy-to-understand errors when dependencies can't be satisfied, and with better-than-NP performance. It ends up being complex and full of compromises.
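A toy sketch of the underlying search (hypothetical packages and constraints, nothing apt- or Cargo-specific) shows why this blows up: even the naive version is an exhaustive walk over one version per package, with a tie-break for "prefer newer":

```rust
// Brute-force version selection over two hypothetical packages. Real solvers
// encode this search as SAT and then work hard to tame the exponential cost
// while also optimizing for the competing criteria mentioned above.
fn solve() -> Option<(u32, u32)> {
    let app_versions = [1u32, 2];
    let lib_versions = [1u32, 2, 3];
    // Constraint: app v2 needs lib >= 2; app v1 needs exactly lib v1.
    let compatible = |app: u32, lib: u32| if app == 2 { lib >= 2 } else { lib == 1 };

    let mut best = None;
    for &app in &app_versions {
        for &lib in &lib_versions {
            // Keep the lexicographically newest compatible pair,
            // a stand-in for "prefer more recent versions".
            if compatible(app, lib) && best.map_or(true, |b| (app, lib) > b) {
                best = Some((app, lib));
            }
        }
    }
    best
}

fn main() {
    assert_eq!(solve(), Some((2, 3)));
    println!("{:?}", solve());
}
```

With N packages and V candidate versions each, this loop is V^N pairs of work, which is exactly the blow-up real solvers have to avoid.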

catlifeonmars 6 hours ago [-]
Go’s solver has been my favorite so far. But it relies on semver actually being meaningful.
cactusfrog 20 hours ago [-]
Could the Rust code be transpiled to readable C?
estebank 17 hours ago [-]
> readable

No, because some things that are UB in C are not in Rust, and vice versa, so any codegen has to account for that and will result in additional verbosity that you wouldn't see in "native" code.

inetknght 20 hours ago [-]
> the "new solver" currently lacks a testsuite

To borrow a phrase I recently coined:

If it's not tested then it's not Engineered.

You'd think that core tools would have proper Software Engineering behind them. Alas, it's surprising how many do not.

nofunsir 20 hours ago [-]
Unit tests do not make Software Engineering. They're simply part of the development phase, which should be the smallest of all the phases involved in REAL Software Engineering, which is rarely even done these days outside of DO-178 (et al.) monotony. The entire private-to-public industry has even polluted upper management in defense software engineering into accepting SCRUM as somehow more desirable than the ability to effectively plan your requirements and execute without deviation. Yes it's possible, and yes it's even plausible. SWE laziness turns Engineers into developers. Running some auto-documentation script or drawing a generic non-official block diagram is not the same as a Civil PE creating blueprints for a house, let alone a mile-long bridge or skyscraper.
nerdponx 19 hours ago [-]
As far as I understand the idea behind scrum, it's not that you don't plan; it's that you significantly shorten the planning-implementation-review cycle.
brobdingnagians 19 hours ago [-]
Perhaps that is the ideal when it was laid out, but the reality of the common implementation is that planning is dispensed with. It gives some management a great excuse to look no further than the next jira ticket, if that.

The ideal implementation of a methodology is only relevant for a small number of management who would do well with almost any methodology because they will take initiative to improve whatever they are doing. The best methodology for wide adoption is the one that works okay for the largest number of management who struggle to take responsibility or initiative.

That is to say, the methodology that requires management to take responsibility in its "lowest energy state" is the best one for most people-- because they will migrate to the lowest energy state. If the "lowest energy state" allows management to do almost nothing, then they will. If the structure allows being clueless, a lot of managers will migrate to pointy haired Dilbert manager cluelessness.

With that said; I do agree with getting products to clients quickly, getting feedback quickly, and being "agile" in adapting to requirements; but having a good plan based on actual knowledge of the requirements is important. Any strict adherence to an extreme methodology is probably going to fail in edge cases, so having the judgement of when to apply which methodology is a characteristic of good management. You've got to know your domain, know your team, and use the right tool for the job.

nofunsir 19 hours ago [-]
I've got a bridge to sell. It's made from watered-down concrete and comes with blueprints written on site. It was very important to get the implementation started asap to shorten the review cycle.
zanellato19 19 hours ago [-]
Nonsense. I know and talk to multiple engineers all the time, and they all envy our ability to keep fixing issues in a project after it ships.

Mechanical engineers have to work around other components' failures all the time because their lead times are gigantic, and no matter how much planning they do, failures still pop up.

The idea that Software Engineering has more bugs is absurd. Electronic, mechanical, and electrical engineers all face issues similar to ours and normally don't have the capacity to deploy fixes as fast as we do because of real-world constraints.

nofunsir 19 hours ago [-]
Not nonsense. Don't be reductive.
zanellato19 17 hours ago [-]
I think you were being reductive in your original comment. The idea of cycling planning and implementation is nothing new, and it is widely used in the other disciplines. Saying that agile is the problem is misguided, and pointing to other engineering disciplines as if "they do it better" is usually a sign that you don't talk to those engineers.

Of course we can plan things better, but implementation does inform planning and vice versa and denying that is denying reality.

nofunsir 14 hours ago [-]
I don't think this is productive, since you're so adamant [1] that "big C memory safe programs don't exist." I know for a fact they do. Most of that software you won't ever see. What do you think powers the most critical systems in, say, a fifth-gen fighter, or the software that the NSA relies on in its routers?

I'll give you a hint. It's neither rust- nor scrum-based. I'd rather change careers or retire than work another day doing scrum standups.

[1] https://news.ycombinator.com/item?id=45353150

surajrmal 19 hours ago [-]
Integration tests are still tests. There are definitely cases for tools where you can largely get by without unit tests in favor of integration tests. I've written a lot of code generation tools this way for instance.
mikepurvis 18 hours ago [-]
Unit tests are for testing branchiness— what happens in condition X, what about condition Y? Does the internal state remain sane?

Integration tests are for overall sanity— do a few happy paths basically work? what about when we make changes to the packaging metadata or roll dependencies forward?

Going unit-test free makes total sense in the case of code that doesn't have much in the way of branching, or where the exceptional cases can just be an uncontrolled exit. Or if you feel confident that your type system's unions are forcing you to cover your bases. Either way, you don't need to test individual functions or modules if running the whole thing end to end gives you reasonable confidence in those.
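As a minimal illustration (the `classify` function here is hypothetical, not from any real codebase): the branchy helper is where unit-level checks earn their keep, while an integration test would typically only walk one happy path through it:

```rust
// A small branchy function: three paths, two boundary values. This is the
// kind of "branchiness" that unit tests are well suited to pin down.
fn classify(n: i32) -> &'static str {
    if n < 0 {
        "negative"
    } else if n == 0 {
        "zero"
    } else {
        "positive"
    }
}

fn main() {
    // Unit-style checks cover every branch, including the edges.
    assert_eq!(classify(-1), "negative");
    assert_eq!(classify(0), "zero");
    assert_eq!(classify(i32::MAX), "positive");
    println!("all branches covered");
}
```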

inetknght 18 hours ago [-]
> Integration tests are still tests.

I didn't say they're not. Integration tests definitely help towards "being tested".

> There are definitely cases for tools where you can largely get by without unit tests in favor of integration tests.

Very strong disagree. I think there are no cases where a strong integration test regime can allow a software project to forego unit tests.

Now, that said, we're probably talking the same thing with different words. I think unit tests with mocks are practically useless. But mocks are the definition of most people's unit tests. Not to me; to me unit tests use real code and real objects. To me, a unit test is what a lot of people call an integration test. And, to me, what I call an integration test, is often what people call system tests or end-to-end tests.

mystifyingpoi 17 hours ago [-]
> I think unit tests with mocks are practically useless

IMO that's on the extreme side too. I've seen a fair share of JUnit monstrosities with 10+ mocks injected "because the project has been written this way so we must continue this madness", but mocking can be done right, it's just overused so much that, well, maybe you're right - it's easier to preach it out than teach how to do it right.

dv35z 21 hours ago [-]
Every time I consider learning Rust, I am put off by how... "janky" the syntax is. It seems to me that we ought to have a systems-level language which builds upon the learnings of the past 20+ years. Can someone help me understand this? Why are we pushing forward with a language that has a Perl-esque unreadability...?

Comparison: I often program in Python (and teach it) - and while it has its own syntax warts & frustrations - overall the language has a "pseudocode which compiles" approach, which I appreciate. Similarly, I appreciate what Kotlin has done with Java. Is there a "Kotlin for Rust"? or another high quality system language we ought to be investing in? I genuinely believe that languages ought to start with "newbie friendliness", and would love to hear challenges to that idea.

movpasd 20 hours ago [-]
You might find this blog post interesting; it argues that it's Rust's semantics, not its syntax, that results in the noisiness, i.e., the complexity is intrinsic:

https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html

I found it reasonably convincing. For what it's worth, I found Rust's syntax quite daunting at first (coming from Python as well), but it only took a few months of continuous use to get used to it. I think "Perl-esque" is an overstatement.

It has some upsides over Python as well, notably that the lack of significant whitespace means inserting a small change and letting the autoformatter deal with syntax changes is quite easy, whereas in Python I occasionally have to faff with indentation before Black/Ruff will let me autoformat.

I appreciate that for teaching, the trade-offs go in the other direction.

xscott 20 hours ago [-]
I'm not sure which of the dozen Rust-syntax supporters I should reply to, but consider something like these four (probably equivalent) syntaxes:

    let mut a = Vec::<u32>::new();
    let mut b = <Vec::<u32>>::new();
    let mut c = <Vec<u32>>::new();
    let mut d: Vec<u32> = Vec::new();
Which one will your coworker choose? What will your other coworkers choose?

This is day one stuff for declaring a dynamic array. What you really want is something like:

    let mut z = Vec<u32>::new();
However, the grammar is problematic here because of using less-than and greater-than as brackets in a type "context". You can explain that as either not learning from C++'s mistakes or trying to appeal to a C++ audience I guess.

Yes, I know there is a `vec!` macro. Will you require your coworkers to declare a similar macro when they start to implement their own generic types?

There are lots of other examples when you get to what traits are required to satisfy generics ("where clauses" vs "bounds"), or the lifetime signature stuff and so on...

You can argue that strong typing has some intrinsic complexity, but it's tougher to defend the multiple ways to do things, and that WAS one of Perl's mantras.

gpm 19 hours ago [-]
This is like complaining that in C you can write

    a->b
    (a->b)
    (*a).b
    ((*a).b)
Being able to use disambiguated syntaxes, and being able to add extra brackets, isn't an issue.

PS. The formatting tooling normalizes your second and third example to the same syntax. Personally I think it ought to normalize both of them to the first syntax as well, but it's not particularly surprising that it doesn't because they aren't things anyone ever writes.

xscott 18 hours ago [-]
> This is like complaining that in C [...]

It's really not. Only one of my examples has the equivalent of superfluous parens, and none are dereferencing anything. And I'm not defending C or C++ anyways.

When I was trying to learn Rust (the second time), I wanted to know how to make my own types. As such, the macro `vec!` mentioned elsewhere isn't really relevant. I was using `Vec` to figure things out so I could make a `FingerTree`:

    let v: Vec<u32> = Vec::new();  // Awfully Java-like in repeating myself

    let v = Vec::new(); // Crap, I want to specify the type of Vec

    let v = Vec<u32>::new();  // Crap, that doesn't compile.
And so on...
duped 18 hours ago [-]
> let v = Vec::new(); // Crap, I want to specify the type of Vec

This kinda implies you've gone wrong somewhere. That doesn't mean there aren't cases where you need type annotations (they certainly exist!) but that if `Vec::new()` doesn't compile because the compiler couldn't deduce the type, it implies something is off with your code.

It's impossible to tell you exactly what the problem was, just that `<Vec<T>>::new()` is not code that you would ever see in a Rust codebase.

gpm 18 hours ago [-]
Nah, there's lots of times you need to specify the types of Vec, either because

1. You don't want the default `i32` integer type and this is just a temporary vector of integers.

2. Rust's type inference is not perfect and sometimes the compiler will object even though there's only one type that could possibly work.

Edit: The <Vec<T>>::new() syntax is definitely never used though.

ViewTrick1002 15 hours ago [-]
Or just collect::<Vec<_>>() when you end up doing everything in a lazy pattern and want a concrete type again.

Which I guess is a typical stumbling block when the compiler can't infer what type to collect into.
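For readers new to this, a small sketch of the two equivalent ways to pin the type down:

```rust
// The turbofish on `collect` names the container when nothing else in the
// expression pins it down; annotating the binding works just as well.
fn doubled() -> Vec<i32> {
    (1..=3).map(|n| n * 2).collect::<Vec<_>>()
}

fn main() {
    // Equivalent: the annotation on the binding supplies the type instead.
    let annotated: Vec<i32> = (1..=3).map(|n| n * 2).collect();
    assert_eq!(doubled(), annotated);
    println!("{:?}", doubled());
}
```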

iknowstuff 19 hours ago [-]
Most likely

    let e = Vec::new()
or

    let f = vec![]
rustc will figure out the type
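A small sketch of that inference at work (the `build` helper is just for illustration):

```rust
// With no annotation at the binding, the first concrete use downstream
// fixes the element type for the whole vector.
fn build() -> Vec<u8> {
    let mut e = Vec::new(); // element type still open here
    e.push(42u8);           // now it's pinned to Vec<u8>
    e
}

fn main() {
    assert_eq!(build(), vec![42u8]);
    println!("{:?}", build());
}
```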
J_Shelby_J 19 hours ago [-]
Exactly. You specify types for function parameters and structs and let the language do its thing. It's a bit niche to specify a type within a function...

There is a reason the multiple methods detailed above exist, mostly for random iterator syntax, such as summing an array or calling collect on an iterator. Most Rust devs probably don't use all of these syntaxes in a single year, or maybe even in their careers.

vablings 19 hours ago [-]
I can't believe that a flexible powerful syntax is considered limiting or confusing by some people. There is way more confusing edge-case syntax keywords in C++ that are huge foot-guns.
xscott 18 hours ago [-]
Do these print statements print the same thing?

    let i = 1;
    let j = 1;
    print!("i: {:?}\n", !i);
    print!("j: {:?}\n", !j);

    let v = vec![1, 2, 3];
    v[i];
There are definitely times you want to specify a type.
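For anyone checking: no, they don't print the same thing, and the indexing is why. A sketch (wrapped in a hypothetical `nots` helper) of what the compiler infers:

```rust
// `v[i]` forces `i` to be usize, while `j` falls back to the default i32,
// so the two `!`s operate on different integer types.
fn nots() -> (usize, i32) {
    let i = 1;
    let j = 1;
    let v = vec![1, 2, 3];
    let _ = v[i]; // Vec indexing takes usize, pinning i's type
    (!i, !j)
}

fn main() {
    let (ni, nj) = nots();
    assert_eq!(ni, usize::MAX - 1); // all bits of 1usize flipped
    assert_eq!(nj, -2);             // two's-complement NOT on i32
    println!("{} {}", ni, nj);
}
```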
soiltype 13 hours ago [-]
> There are definitely times you want to specify a type.

So I'm coming from basically only TypeScript type-system experience, but that seems completely OK to me. There are times I make my TS uglier to make it less ambiguous and times I make it more ambiguous to make it more readable. It's unreasonable, IMO, that such a system could universally land on the most readable format, even if we could all agree what's most readable. Instead, some cases are going to be tradeoffs so that the more common cases can flow unimpeded.

xscott 4 hours ago [-]
The Rust example I showed changes behavior when you declare the types. It's not just a readability or bug catching thing.
bobbylarrybobby 19 hours ago [-]
I've only ever seen `a` and `d`. Personally I prefer `a`. The only time I've seen `c` is for trait methods like `<Self as Trait<Generic>>::func`. Noisy? I guess. Not sure how else this could really be written.
xscott 19 hours ago [-]
Fwiw, I didn't go looking for obscure examples to make HN posts. I've had three rounds of sincerely trying to really learn and understand Rust. The first was back when pointer types had sigils, but this exact declaration was my first stumbling block on my second time around.

The first version I got working was `d`, and my first thought was, "you're kidding me - the right hand side is inferring its type from the left?!?" I didn't learn about "turbofish" until some time later.

bobbylarrybobby 9 hours ago [-]
Rust’s inference is generally a strength. If there's a type-shaped hole to fill, and only one way to fill it, Rust will just do it. So for instance `takes_a_vec(some_iter.collect())` works even though `collect` has a generic return type — being passed to `takes_a_vec` implies it must be a Vec, and so that's what Rust infers.
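A sketch of that hole-filling (the `takes_a_vec` function here is hypothetical):

```rust
// The parameter type supplies the answer to `collect`'s generic return type:
// no annotation or turbofish needed at the call site.
fn takes_a_vec(v: Vec<i32>) -> usize {
    v.len()
}

fn main() {
    let n = takes_a_vec((1..=4).collect()); // collect() inferred as Vec<i32>
    assert_eq!(n, 4);
    println!("{}", n);
}
```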
gpm 18 hours ago [-]
> The first version I got working was `d`, and my first thought was, "you're kidding me - the right hand side is inferring it's type from the left?!?" I didn't learn about "turbo fish" until some time later.

Tbh d strikes me as the most normal - right hand sides inferring the type from the left exists in basically every typed language. Consider for instance the C code

    some_struct a = { .flag = true, .value = 123, .stuff = 0.456 };
Doing this inference at a distance is more of a feature of the sml languages (though I think it now exists even in C with `auto`) - but just going from left to right is... normal.
xscott 18 hours ago [-]
I see your point, and it's a nice example, but not completely parallel to the Rust/StandardML thing. Here, your RHS is an initializer, not a value.

    // I don't think this flies in C or C++,
    // even with "designated initializers":
    f({ .flag = true, .value = 123, .stuff=0.456});

    // Both of these "probably" do work:
    f((some_struct){ .flag = true, ... });
    f(some_struct{ .flag = true, ... });

    // So this should work too:
    auto a = (some_struct){ .flag = true, ... };
Take all that with a grain of salt. I didn't try to compile any of it for this reply.

Anyways, I only touched SML briefly 30 some years ago, and my reaction to this level of type inference sophistication in Rust went through phases of initial astonishment, quickly embracing it, and eventually being annoyed at it. Just like data flows from expressions calculating values, I like it when the type inference flows in similarly obvious ways.

Aurornis 20 hours ago [-]
> Which one will your coworker choose? What will your other corworkers choose?

I don’t think I’ve ever seen the second two syntaxes anywhere.

I really don’t think this is a problem.

dontlaugh 19 hours ago [-]
This will be the case in any language with both generics and type inference. It's nothing to do specifically with Rust.
steveklabnik 20 hours ago [-]
I mean, the fact that you mention "probably equivalent" is part of the reality here: Nobody writes the majority of these forms in real code. They are equivalent, by the way.

In real code, the only form I've ever seen out of these in the wild is your d form.

xscott 19 hours ago [-]
This is some True Scotsman style counter argument, and it's hard for me to make a polite reply to it.

There are people who program with a "fake it till you make it" approach, cutting and pasting from Stack Overflow, and hoping the compiler errors are enough to fix their mess. Historically, these are the ones your pages/books cater to, and the ones who think the borrow checker is the hard part. It doesn't surprise me that you only see code from that kind of beginner and experts on some rust-dev forum and nothing in between.

steveklabnik 19 hours ago [-]
The issue though is that this isn't a solvable "problem". This is how programming languages' syntax work. It's like saying that C's if syntax is bad because these are equivalent:

  if (x > y) {

  if ((x > y)) {

  if (((x) > (y))) {
Yes, one of your co-workers may write the third form. But it's just not possible for a programming language to stop this from existing, or at least, maybe you could do it, but it would add a ton of complexity for something that in practice isn't a problem.
xscott 19 hours ago [-]
Only `b` has the equivalent of "superfluous parens".

It's practically your job to defend Rust, so I don't expect you to budge even one inch. However, I hate the idea of letting you mislead the casual reader into thinking this is somehow equivalent and "just how languages work".

The grammar could've used `Generic[Specific]` with square brackets and avoided the need for the turbo fish.

steveklabnik 18 hours ago [-]
It hasn't been my job to work on Rust in for years now. And even then, it was not to "defend" Rust, but to write docs. I talk about it on my own time, and I have often advocated for change in Rust based on my conversations with users.

If you're being overly literal, yes, the <>s are needed here for this exact syntax. My point was not about this specific example, it's that these forms are equivalent, but some of them are syntactically simpler than others. The existence of redundant forms does not make the syntax illegitimate, or overly complex.

For this specific issue, if square brackets were used for generics, then something else would have to change for array indexing, and folks would be complaining that Rust doesn't do what every other language does here, which is its own problem.

xscott 17 hours ago [-]
> For this specific issue, if square brackets were used for generics, then something else would have to change for array indexing

The compiler knows when the `A` in `A[B]` is a type vs a variable.

steveklabnik 17 hours ago [-]
A compiler could disambiguate, but the goal is to have parsing happen without knowing if A is a type or a variable. That is the inappropriate intertwining of parsing and semantics that languages are interested in getting away from, not continuing with.

Anyway, just to be clear: not liking the turbofish is fine, it's a subjective preference. But it's not an objective win, that's all I'm saying. And it's only one small corner of Rust's syntax, so I don't think that removing it would really alleviate the sorts of broad objections that the original parent was talking about.

kstrauser 18 hours ago [-]
> The grammar could've used `Generic[Specific]` with square brackets and avoided the need for the turbo fish.

But then people would grouse about it using left-bracket and right-bracket as brackets in a type "context".

GoblinSlayer 16 hours ago [-]
The problem here is that angle brackets are semantics dependent syntax. Whether they are brackets or not depends on semantic context. Conversely square brackets are always brackets.
steveklabnik 16 hours ago [-]
Square brackets would be semantically dependent if they appeared in the same positions as angle brackets. There's nothing magical about [] that makes the problems with <> disappear.
GoblinSlayer 15 hours ago [-]
It makes the problem that angle brackets are sometimes not brackets disappear. I.e. a<b>c is parsed as (a<b)>c or as (a(<b>))c.
xscott 14 hours ago [-]
It also comes up when you want compile time expressions as parameters to your generics:

    // nice and clean
    let a = Generic[T, A > B]::new(); 

    // gross curlies needed because of poor choices
    let a = Generic::<T, {A > B}>::new();
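For reference, the braced form does compile on stable Rust when the operands are constants; a minimal sketch (the `Flagged`, `A`, and `B` names are illustrative):

```rust
// A unit struct with a const-generic bool parameter (illustrative).
struct Flagged<const ON: bool>;

impl<const ON: bool> Flagged<ON> {
    fn is_on(&self) -> bool {
        ON
    }
}

fn main() {
    const A: usize = 3;
    const B: usize = 2;
    // The braces mark a const expression, so the `>` inside is not
    // mistaken for the closing angle bracket of the argument list.
    let f = Flagged::<{ A > B }>;
    assert!(f.is_on());
}
```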
kstrauser 15 hours ago [-]
So that’s the Specificth element of Generic?
GoblinSlayer 15 hours ago [-]
It's Brackets(Generic,Specific).
xscott 18 hours ago [-]
Lol, yes they would. However, I wouldn't. :-)
dragonwriter 19 hours ago [-]
Well, the solution usually isn't in syntax, but it often is solved by way of code formatters, which can normalize the syntax to a preferred form among several equivalent options.
steveklabnik 19 hours ago [-]
I certainly would support rustfmt turning those redundant forms into the simpler one.
estebank 17 hours ago [-]
I suspect rustfmt would consider this out of scope, but there should be a more... "adventurous" code formatter that does more opinionated changes. On the other hand, you could write a clippy lint today and rely on rustfix instead.
Aurornis 18 hours ago [-]
Agree. This isn't really a problem unless you also think that extra parentheses are a problem.

In many languages you could write:

> if (a + b) > (c + d)

or

> if a + b > c + d

And they're equivalent. Yet nobody complains that there are too many options.

jandrese 20 hours ago [-]
I think Perl-esque is apt, but that's because I've done quite a bit of Perl and think the syntax concerns are overblown. Once you get past the sigils on the variables Perl's syntax is generally pretty straightforward, albeit with a few warts in places like almost every language. The other area where people complained about Perl's opaqueness was the regular expressions, which most languages picked up anyway because people realized just how useful they are.
echelon 20 hours ago [-]
That's it exactly.

Once you're writing Rust at full speed, you'll find you won't be putting lifetimes and trait bounds on everything. Some of this becomes implicit, some of it you can just avoid with simpler patterns.

When you write Rust code without lifetimes and trait bounds and nested types, the language looks like Ruby lite.

When you write Rust code with traits or nested types, it looks like Java + Ruby.

When you sprinkle in the lifetimes, it takes on a bit of character of its own.

It honestly isn't hard to read once you use the language a lot. Imagine what Python looks like to a day zero newbie vs. a seasoned python developer.

You can constrain complexity (if you even need it) to certain modules, leaving other code relatively clean. Imagine the Python modules that use all the language features - you've seen them!

One of the best hacks of all: if you're writing HTTP services, you might be able to write nearly 100% of your code without lifetimes at all. Because almost everything happening in request flow is linear and not shared.
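A minimal sketch of that pattern, with no web framework involved (the `Request` type and `handle` function are illustrative stand-ins; frameworks like axum hand handlers owned data in much the same way):

```rust
// Illustrative stand-in for a framework's request type; the point is
// that the handler receives owned data, not borrowed references.
struct Request {
    path: String,
    body: String,
}

// Owned in, owned out: the data moves linearly through the function,
// so no lifetime annotations appear anywhere.
fn handle(req: Request) -> String {
    format!("{}: {} bytes", req.path, req.body.len())
}

fn main() {
    let resp = handle(Request {
        path: "/hello".into(),
        body: "hi".into(),
    });
    assert_eq!(resp, "/hello: 2 bytes");
}
```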

carlmr 20 hours ago [-]
>When you write Rust code without lifetimes and trait bounds and nested types, the language looks like Ruby lite.

And once you learn a few idioms this is mostly the default.

SoftTalker 20 hours ago [-]
This honestly reads like the cliche "you just don't get it yet" dismissals of many rust criticisms.
echelon 19 hours ago [-]
Not at all!

I'm trying to sell Rust to someone who is worried about it. I'm not trying to sound elitist. I want people to try it and like it. It's a useful tool. I want more people to have it. And that's not scaring people away.

Rust isn't as hard or as bad as you think. It just takes time to let it sink in. It's got a little bit of a learning curve, but that pain goes away pretty quick.

Once you've paid that down, Rust is friendly and easy. My biggest gripe with Rust is compile times with Serde and proc macros.

xscott 17 hours ago [-]
> Rust isn't as hard or as bad as you think.

I think this depends a LOT on what you're trying to do and what you need to learn to do it. If you can get by with the std/core types and are happy with various third party crates, then you don't really need to learn the language very deeply.

However, if you want to implement new data structures or generic algorithms, it gets very deep very quickly.

echelon 9 hours ago [-]
Why would you say that? I feel this pushes people away.

"Hey, you might be able to use Rust trivially if you stick to XYZ, but if you dare touch systems programming you're in for some real hurt. Dragons everywhere."

Why say that? It's not even remotely true - it's a gradient of learning. You can use Rust for simple problems as a gateway into systems programming.

Rust is honestly a great alternative to Python or Golang for writing servers. Especially given that you can deploy static binaries or WASM.

We need more people learning the language, not to scare them away.

Rust is getting easier year over year, too! People can choose Rust for their problems today and not struggle.

Give them a cookie and let them see for themselves.

xscott 6 hours ago [-]
> Why would you say that? I feel this pushes people away.

It's not my obligation to evangelize for your pet language. I've spent enough time and written enough code in Rust to have a defensible viewpoint. This is a public forum - I'll share my opinion if I want to.

Writing servers? Sure, go grab the crate that solves your problem and get on with it. Basically what I said above.

If I thought you would bother to do them, I could give you a list of concrete problems which ought to be super easy but are in fact really hard or ugly to do in Rust.

Phrases like "systems programming" have become so diluted that I'm not even sure what you mean. Once upon a time, that was something like writing a device driver. Now people use the phrase for things like parsing a log file or providing a web server.

I wanted to use Rust for numerical methods and data visualization. I didn't like the existing solutions, so I was willing to write my own Rust libraries from scratch. It was pretty painful, and the learning curve was steep.

> Why say that? It's not even remotely true

I didn't write the thing you quoted. Using a straw man argument like this is a lame tactic.

mrweasel 20 hours ago [-]
That article is really good, because it highlights that Rust doesn't have to look messy. Part of the problem, I think, is that there are a few too many people who think the messy version is better, because it "uses more of the language" and makes them look smarter. Or maybe Rust just makes it too hard to see through the semantics and realize that just because a feature is there doesn't mean you need it.

There's also a massive difference between the type of C or Perl someone like me would write, versus someone trying to cope with a more hostile environment or who requires higher levels of performance. My code might be easier to read, but it technically has issues; they are mostly not relevant, while the reverse is true for a more skilled developer in a different environment. Rust seems to attract really skilled people who have really defensive code styles or who use more of the provided language features, and that makes the code harder to read, but that would also be the case in e.g. C++.

enriquto 20 hours ago [-]
> I am thrown back by how... "janky" the syntax is.

Well if you come from C++ it's a breath of fresh air! Rust is like a "cleaned-up" C++, that does not carry the historical baggage forced by backwards compatibility. It is well-thought out from the start. The syntax may appear a bit too synthetic; but that's just the first day of use. If you use it for a few days, you'll soon find that it's a great, beautiful language!

The main problem with rust is that the community around it has embraced all the toxic traditions of the js/node ecosystem, and then some. Cargo is a terrifying nightmare. If you could install regular rust dependencies with "apt install" in debian stable, that would be a different story! But no. They want the version churn: continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole.

Concerning TFA, adding rust to apt might be a step in the right direction. But it should be symmetric: apt depends on rust, that's great! But all the rust that it depends on needs to be installed by apt, and by apt alone!

tcfhgj 20 hours ago [-]
I am coming from C++ and think Cargo is a blessing.

I like that I can just add a dependency and be done instead of having to deal with dependencies which require downloading stuff from the internet and making them discoverable for the project specific tool chain - which works differently on every operating system.

Same goes for compiling other projects.

jandrese 20 hours ago [-]
While it kinda flies under the radar, most modern C projects do have a kind of package management solution in the form of pkg-config. Instead of the wild west of downloading and installing every dependency and figuring out how to integrate it properly with the OS and your project you can add a bit of syntactic sugar to your Makefile and have that mostly handled for you, save for the part where you will need to use your platform's native package manager to install the dependencies first. On a modern system using a package on a C project just requires a Makefile that looks something like this:

    CC=clang
    MODULES=glib-2.0 atk
    CFLAGS=-g -Wall -pedantic --std=c17 `pkg-config --cflags $(MODULES)`
    LDLIBS=`pkg-config --libs $(MODULES)`

    ALL: myapp

    myapp: myapp.c utils.c io.c
adastra22 18 hours ago [-]
Unless you are using nix or something, pkg-config is comparing apples to oranges.
kataklasm 20 hours ago [-]
But that is the kind of convenience and ease of use that brings us another npm malware incident every other month at this point.
juliangmp 20 hours ago [-]
This is a real problem, but I wouldn't blame it on the existence of good tooling. Sure, you don't have this issue with C or C++, but that's because adding even a single dependency to a C or C++ project sucks; the tooling sucks.

I wholly blame developers who are too eager to just pull new dependencies in when they could've just written 7 lines themselves.

jandrese 20 hours ago [-]
I remember hearing a few years ago about how developers considered every line of code they wrote as a failing, and talked about how modern development was just gluing otherwise-maintained modules together to avoid having to maintain their own project. I thought this sounded insane and I still do.
r_lee 20 hours ago [-]
And in a way I think AI can help here, where instead you get just the snippet vs having to add that dep that then becomes a long-term security liability
krior 17 hours ago [-]
On the other hand you don't have developers handrolling their own shitty versions of common things like hashmaps or JSON serializers, just because the dependencies are too hard to integrate.
JuniperMesos 18 hours ago [-]
> The main problem with rust is that the community around it has embraced all the toxic traditions of the js/node ecosystem, and then some. Cargo is a terrifying nightmare. If you could install regular rust dependencies with "apt install" in debian stable, that would be a different story! But no. They want the version churn: continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole.

Something I didn't appreciate for a long time is that the C/C++ ecosystem does have an npm-like package management ecosystem - it is just implemented at the level of Linux distro maintainers deciding what to package and how. Which worked ok because C was the lingua franca of Unix systems.

But actually it's valuable for programmers to be able to specify their dependencies for their own projects and update them on a schedule unconnected and uncoordinated with the OS's releases. The cargo/npm model is closer to ideal.

Of course what is even better is NixOS-like declarative specification and hashing of all dependencies

steveklabnik 20 hours ago [-]
Debian already builds Rust packages for apt, so it will satisfy that criterion.
superxpro12 20 hours ago [-]
As a c/c++ cmake user, cargo sounds like a utopia in comparison. It still amazes me that c/c++ package management is still spread between about 5 different solutions.

IMO, the biggest improvement to C/C++ would be ISO defining a package manager à la pip or uv or cargo. I'm so tired of writing cmake. just... tired.

NekkoDroid 13 hours ago [-]
Do note that a common package specification, called CPS (Common Package Specification), is being worked on (I believe with standardization in mind). It doesn't specify how you get your dependencies, but it does specify what they should look like, so that your actual package manager does not need to care about build-system-specific formats the way it currently does.
steveklabnik 12 hours ago [-]
Worth reading, from a year ago: https://www.reddit.com/r/cpp/comments/1hgpz0y/wg21_aka_c_sta...

It seems CPS is still being worked on, but not under the standardization of the committee, due to the above.

hedora 20 hours ago [-]
People that don't understand make are destined to recreate it poorly, and there's no better example than cmake, imho.

Here's my arc through C/C++ build systems:

- make (copy pasted examples)

- RTFM [1]

- recursive make for all sorts of non-build purposes - this is as good as hadoop up to about 16 machines

- autotools

- cmake

- read "recursive make considered harmful" [2]

- make + templates

Anyway, once you've understood [1] and [2], it's pretty hard to justify cmake over make + manual vendoring. If you need windows + linux builds (cmake's most-advertised feature), you'll pretty quickly realize the VS projects it produces are a hot mess, and wonder why you don't just maintain a separate build config for windows.

[1] https://www.gnu.org/software/make/manual/

[2] https://news.ycombinator.com/item?id=20014348

If I was going to try to improve on the state of the art, I'd clean up a few spots in make's semantics where it misses productions in complicated corner cases (the problems are analogous to prolog vs datalog), and then fix the macro syntax.

If you want a good package manager for C/C++, check out Debian or its derivatives. (I'm serious -- if you're upset about the lack of packages, there's a pretty obvious solution. Now that docker exists, the packages run most places. Support for some sort of AppImage style installer would be nice for use with lesser distros.)

duped 19 hours ago [-]
cmake exists not because people didn't understand make, but because there was no one make to understand. The "c" is for "cross platform." It's a replacement for autoconf/automake, not a replacement for make.

> If I was going to try to improve on the state of the art

The state of the art is buck/bazel/nix/build2.

enriquto 20 hours ago [-]
cmake is a self-inflicted problem of some C++ users, and an independent issue of the language itself (just like cargo for rust). If you want, you can use a makefile and distribution-provided dependencies, or vendored dependencies, and you don't need cmake.
duped 19 hours ago [-]
imo the biggest single problem with C++ is that the simple act of building it is not (and, it seems, cannot be) standardized.

This creates kind of geographic barriers that segregate populations of C++ users, and just like any language, that isolation begets dialects and idioms that are foreign to anyone from a different group.

But the stewards of the language seem to pretend these barriers don't exist, or at least don't understand them, and go on to make the mountain ranges separating our valleys even steeper.

So it's not that CMake is a self-inflicted wound. It's the natural evolution of a tool to fill in the gaps left under specified by the language developers.

raincole 20 hours ago [-]
> Cargo is a terrifying nightmare

Really? Why? I'm not a Rust guru, but Cargo is the only part of Rust that gave me a great first impression.

freedomben 20 hours ago [-]
GP mostly answered that in the comment already:

> If you could install regular rust dependencies with "apt install" in debian stable, that would be a different story! But no. They want the version churn: continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole.

raincole 20 hours ago [-]
I don't know, it doesn't explain how and why Cargo causes "continuously adding and removing bugs, like particle/anti-particle pairs at the boundary of a black hole."
adastra22 18 hours ago [-]
They are conflating unrelated things. Cargo is a downstream result of the thing that annoys them, not the cause. What they don’t like is that rust is statically linked with strong versioned dependencies. There are pros and cons to that, but one outcome (which some list as pro and some list as con) is that you need to recompile world for every project. Hence, cargo.
steveklabnik 20 hours ago [-]
The problem, of course, is that "apt install" only works on platforms that use apt to manage their packages.
newsoftheday 19 hours ago [-]
> Rust is like a "cleaned-up" C++

Except they got the order of type and variable wrong. That alone is enough reason to never use Rust, Go, TypeScript or any other language that botches such a critical cornerstone of language syntax.

iknowstuff 19 hours ago [-]
[flagged]
grayhatter 19 hours ago [-]
That was needlessly rude.
Aurornis 21 hours ago [-]
> Comparison: I often program in Python (and teach it) - and while it has its own syntax warts & frustrations - overall the language has a "pseudocode which compiles" approach, which I appreciate.

I think this is why you don’t like Rust: In Rust you have to be explicit by design. Being explicit adds syntax.

If you appreciate languages where you can write pseudocode and have the details handled automatically for you, then you’re probably not going to enjoy any language that expects you to be explicit about details.

As far as “janky syntax”, that’s a matter of perspective. Every time I deal with Python and do things like “__slots__” it feels like janky layer upon layer of ideas added on top of a language that has evolved to support things it wasn’t originally planned to do, which feels janky to me. All of the things I have to do in order to get a performant Python program feel incredibly janky relative to using a language with first class support for the things I need to do.

mixmastamyk 18 hours ago [-]
> Being explicit adds syntax.

Not what they are talking about. Rather, it's better to use words instead of symbols, like Python over Perl.

Instead of the “turbofish” and <‘a>, there could be more keywords like mut or dyn. Semicolons and ‘c’har are straight out of the seventies as well. :: not useful and ugly, etc.

Dunders avoid namespace collisions and are not a big problem in practice, all one char, and easy to read. I might remove the trailing part if I had the power.

Aurornis 18 hours ago [-]
This is just personal preference and familiarity.

Python using indentation to convey programming meaning feels janky and outdated to people not familiar with Python, but programmers familiar with it don't think twice about it.

mixmastamyk 18 hours ago [-]
No, it’s not only familiarity. I learned C/C++ in the early 90s, before Python.

It’s well studied that words are easier to read than nested symbols.

yoyohello13 18 hours ago [-]
Maybe it's my math background but I honestly prefer symbols to keywords. It's more up front cost in learning, but it's much more efficient in the long run.
mixmastamyk 18 hours ago [-]
When you are doing multiple operations to multiple variables, and need to see it all at once, math-like syntax still has benefits.

But this is not the common case for most programming, which is about detailing business rules. Explicit and verbose (though not excessively) has been shown to be the most readable/maintainable. For example, one character variable names, common in math, are heavily discouraged in professional development.

There’s another level to this as well. To me, calculus notation looks quite elegant, while Perl and (parts of) Rust look like trash. Since they are somewhat similar, the remaining angle is good taste.

morshu9001 14 hours ago [-]
I've been relying on Python for a decade+ and still think twice about the indentation. Straight up bad design, and you can't even attribute it to the original use case of dirty scripting cause it's particularly bad in a REPL.
morshu9001 20 hours ago [-]
Both Python and JS evolved by building on top of older versions, but somehow JS did a way better job than Python, even though Py forced a major breaking change.

Agree about Rust, all the syntax is necessary for what it's trying to do.

adastra22 18 hours ago [-]
JS is not something I would hold up in high regard.
jamespo 20 hours ago [-]
You mean typescript?
morshu9001 20 hours ago [-]
Before that. The classes and stuff added in ES6 and earlier
steveklabnik 21 hours ago [-]
Syntax tends to be deeply personal. I would say the most straightforward answer to your question is "many people disagree that it is unreadable."

Rust did build on the learnings of the past 20 years. Essentially all of its syntax was taken from other languages, even lifetimes.

handwarmers 20 hours ago [-]
Are the many who disagree that it is unreadable more than the people who agree? I have been involved with the language for a while now, and while I appreciate what you and many others have done for it, the sense that the group is immune to feedback just becomes too palpable too often. That, and the really aggressive PR.

Rust is trying to solve a really important problem, and so far it might well be one of the best solutions we have for it in a general sense. I 100% support its use in as many places as possible, so that it can evolve. However, its evolution seems to be thwarted by a very vocal subset of its leadership and community who have made it a part of their identity and whatever socio-political leverage toolset they use.

azdle 20 hours ago [-]
I've found the rust core team to be very open to feedback. And maybe I've just been using Rust for too long, but the syntax feels quite reasonable to me.

Just for my own curiosity, do you have an examples of suggestions for how to improve the syntax that have been brought up and dismissed by the language maintainers?

steveklabnik 20 hours ago [-]
> Are the many who disagree that it is unreadable more than the people who agree?

I have no way to properly evaluate that statement. My gut says no, because I see people complain about other things far more often, but I do think it's unknowable.

I'm not involved with Rust any more, and I also agree with you that sometimes Rust leadership can be insular and opaque. But the parent isn't really feedback. It's just a complaint. There's nothing actionable to do here. In fact, when I read the parent's post, I said "hm, I'm not that familiar with Kotlin actually, maybe I'll go check it out," loaded up https://kotlinlang.org/docs/basic-syntax.html, and frankly, it looks a lot like Rust.

But even beyond that: it's not reasonably possible to change a language's entire syntax ten years post 1.0. Sure, you can make tweaks, but turning Rust into Python simply is not going to happen. It would be irresponsible.

tayo42 20 hours ago [-]
> the sense that the group is immune to feedback

Is complaining about syntax really productive though? What is really going to be done about it?

simonask 20 hours ago [-]
This is such a weird take. What do you suggest? Should Rust’s syntax have been democratically decided?
gmueckl 19 hours ago [-]
Rust is almost git hype 2.0. That hype set the world up with (a) a dominant VCS that is spectacularly bad at almost everything it does compared to its competitors and (b) the dominant GitHub social network owned by MS that got ripped to train Copilot.

Developers have a way of running with a hype that can be quite disturbing and detrimental in the long run. The one difference here is that Rust has some solid ideas implemented underneath. But the community proselytizing and throwing non-believers under the bus is quite real.

JuniperMesos 18 hours ago [-]
The lifetime syntax was taken from OCaml but it has somewhat different semantics than OCaml. I honestly get a bit tripped up when I look at OCaml code (a language I'm a beginner at), and see ordinary parameterized types using syntax that suggests to me, from a Rust background, "woah, complex lifetime situation ahead!"

I know that Graydon Hoare is a fan of OCaml and that it was a core inspiration for Rust, and I sometimes wonder if he gets tripped up too by having to switch between Rust-inspired and OCaml-inspired interpretations of the same characters.

steveklabnik 17 hours ago [-]
It's similar but different: both are type variables, but it's true that it's used for the "other" type variables in Rust.

For what it's worth, I am not even sure that Graydon was the one who introduced lifetime syntax. He was a fan of terseness, though: Rust's keywords used to be all five characters or shorter.

Niko and pcwalton were the ones working on regions, Niko talks a little bit about the motivation for syntax here: https://smallcultfollowing.com/babysteps/blog/2012/03/28/avo...

Later posts include /& as syntax: https://smallcultfollowing.com/babysteps/blog/2012/04/25/ref...

Eventually, another syntax: https://smallcultfollowing.com/babysteps/blog/2012/07/10/bor... which turns into a &x/ syntax: https://smallcultfollowing.com/babysteps/blog/2012/07/17/bor...

Which turns into this one, talking about variants of possible syntax: https://smallcultfollowing.com/babysteps/blog/2012/12/30/lif...

At some point, we get the current syntax: https://smallcultfollowing.com/babysteps/blog/2013/04/04/nes...

So, it happened somewhere in here...

aallaall 20 hours ago [-]
There’s syntax that is objectively easier to both read and write, and there’s syntax that is both harder to read and write. For a majority.

In general, using English words consisting of a-z is easier to read. Using regex-like mojibake is harder.

For a concrete example in Rust, using pipes in lambdas instead of an arrow is awful.

steveklabnik 20 hours ago [-]
Rust's pipes in lambdas come from Ruby, a language that's often regarded as having beautiful syntax.

Rust is objectively not mojibake. The equivalent here would be like using a-z, as Rust's syntax is borrowed from other languages in wide use, not anything particularly esoteric. (Unless you count OCaml as esoteric, which I do believe is somewhat arguable, but that's only one thing; the argument still holds for the vast majority of the language.)

JuniperMesos 18 hours ago [-]
I don't think it's an awful choice, but I'll admit that pipes in lambdas are not my favorite bit of syntax. I'm not a fan of them in Ruby either. I personally prefer JavaScript-ish => for lambdas. But I'm not gonna try to bikeshed one syntax decision made over a decade ago that has relatively minor consequences for other parts of the language. The early Rust core team had different taste than I do essentially, and that's fine.
kstrauser 18 hours ago [-]
> In general, using english words consisting of a-z is easier to read.

I’ve seen COBOL in the wild. No thanks.

But also, imagine reading a math proof written in English words. That just doesn’t work well.

iknowstuff 19 hours ago [-]
uuuh I like the pipes even though it's my first language with them?

Concise and much clearer to read vs parentheses where you gotta wonder if the params are just arguments, or a tuple, etc. What are you talking about.

WD-42 20 hours ago [-]
I’ve been writing Python professionally for over 10 years. In the last year I’ve been writing more and more Rust. At first I thought the same as you. It’s a fugly language, there’s no denying it. But once I started to learn what all the weird syntax was for, it began to ruin Python for me.

Now I begrudge any time I have to go back to python. It feels like its beauty is only skin deep, but the ugly details are right there beneath the surface: prolific duck typing, exceptions as control flow, dynamic attributes. All these now make me uneasy, like I can’t be sure what my code will really do at runtime.

Rust is ugly but it’s telling you exactly what it will do.

ActorNightly 17 hours ago [-]
>Now I begrudge any time I have to go back to python. It feels like its beauty is only skin deep, but the ugly details are right there beneath the surface: prolific duck typing, exceptions as control flow, dynamic attributes. All these now make me uneasy, like I can’t be sure what my code will really do at runtime.

I feel like this sentiment is from people who haven't really taken the time to fully see what the Python ecosystem is.

Any language can have shittily written code. However, languages that disallow it by default mean that you have to spend extra time prototyping things, whereas in Python you can often make things work without much issue. Dynamic typing and attributes make the language very flexible and easily adaptable.

WD-42 17 hours ago [-]
Oh I’m familiar with the ecosystem. Yes, the dynamic nature does make it easy to prototype things flexibly. The problem is when your coworker, or you, decide to flexibly and dynamically get the job done on a Friday before a long weekend, and then 3 months later you need to figure out how a variable is being set, or where a method is being called.
ActorNightly 16 hours ago [-]
And that's no different than writing Rust with a bunch of unsafes, and a bunch of indirection as far as processing flow goes.

The nice thing about Python is that it allows you to do either. And naturally, Python has gotten much faster, to the point where its as fast as Java for some things, because when you don't use dynamic typing, it actually recognizes this and optimizes compiled code without having to carry that type information around.

WD-42 16 hours ago [-]
It’s not the same at all. In Rust you cannot just throw an attribute on to a struct in the middle of a function because it makes some call further down the chain easier, no matter how much unsafe you use.

I’m not a python hater, you can’t get some great stuff done with it quickly. But my confidence in writing large complex systems in it is waning.
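A minimal sketch of the point about structs (names like `Job` are illustrative, not from the thread): a Rust struct's fields are fixed at the definition site, so there is no way to bolt a new attribute on mid-function the way you can in Python.

```rust
// A struct's shape is declared once; `unsafe` doesn't change this.
struct Job {
    id: u32,
}

fn main() {
    let job = Job { id: 1 };
    // job.priority = 5; // compile error: no field `priority` on type `Job`
    println!("{}", job.id);
}
```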

ActorNightly 11 hours ago [-]
I can make an argument that you can never have a memory leak in Python, while you can in Rust if you use unsafe.

In the end, both languages allow you to write bad code. But having something that is less strict if you wanna be makes it more flexible.

morshu9001 17 hours ago [-]
JS ruined Python for me, cause it serves similar purposes but handles them better. Rust is a different thing, it ruined C and C++ for me.
mixmastamyk 18 hours ago [-]
Those complaints have very little to do with the syntax.
shmolyneaux 21 hours ago [-]
I would encourage you to give it a try anyways. Unfamiliar syntax is off-putting for sure, but you can get comfortable with any syntax.

Coming from Python, I needed to work on some legacy Perl code. Perl code looks quite rough to a new user. After time, I got used to it. The syntax becomes a lot less relevant as you spend more time with the language.

brettermeier 20 hours ago [-]
Sure... but you don't want to spend that time if it's such a mess to read.
pohl 20 hours ago [-]
Once one does spend some time to become comfortable with the language, that feeling of messiness with unfamiliar syntax fades away. That's the case with any unfamiliar language, not just Rust.
morshu9001 20 hours ago [-]
I used Rust for a year and still wasn't used to the syntax, though this was v1.0 so idk what changed. I see why it's so complicated and would definitely prefer it over C or Cpp, but wouldn't do higher-level code in it.
stingraycharles 21 hours ago [-]
Seems like a fairly decent syntax. It’s less simple than many systems languages because it has a very strong type system. That’s a choice of preference in how you want to solve a problem.

I don’t think the memory safety guarantees of Rust could be expressed in the syntax of a language like C or Go.

everybodyknows 20 hours ago [-]
I code mostly in Go and the typing sloppiness is a major pain point.

Example: You read the expression "x.f", say, in the output of git-diff. Is x a struct object, or a pointer to a struct? Only by referring to enclosing context can you know for sure.

mbel 20 hours ago [-]
> It’s less simple than many systems languages because it has a very strong type system.

I don’t think that’s the case, somehow most ML derived languages ended up with stronger type system and cleaner syntax.

stingraycharles 19 hours ago [-]
Is ML a systems language? Sorry, maybe my definition is wrong, but I consider a systems language something that’s used by a decent number of OSes, programming languages and OS utilities.

I assume you’re talking about OCaml et al? I’m intrigued by it, but I’m coming from a Haskell/C++ background.

Rust is somewhat unique as a systems language because it’s the first one that’s not “simple” like C but is still used for systems tools, more than Go is as far as I’m aware.

Which probably has to do with its performance characteristics being close to the machine, which Go cannot match (i.e. based on LLVM, no GC, etc.)

steveklabnik 20 hours ago [-]
Rust's most complained about syntax, the lifetime syntax, was borrowed from an ML: OCaml.
dontlaugh 19 hours ago [-]
There is no other ML-like that is as low level. Except perhaps ATS, which has terrible syntax.
yoyohello13 20 hours ago [-]
One of the design goals of rust is explicitness. I think if Rust had type elision, like many other functional languages, it would go a long way to cleaning up the syntax.
jjice 20 hours ago [-]
Maybe I've Stockholm'd myself, but I think Rust's syntax is very pleasant. I also think a lot of C code looks very good (although there is some _ugly_ C code out there).

Sometimes the different sets of angle and curly brackets adding up can look ugly at first, and maybe the anonymous function syntax of || {}, but it grows on you if you spend some time with the language (as do all syntaxes, in my experience).

debo_ 21 hours ago [-]
The family of languages that started with ML[0] mostly look like this. Studying that language family will probably help you feel much more at home in Rust.

Many features and stylistic choices from ML derivatives have made their way into Swift, Typescript, and other non-ML languages.

I often say that if you want to be a career programmer, it is a good idea to deeply learn one Lisp-type language (which will help with stuff like Python), one ML-type language (which will help with stuff like Rust) and one C-type language (for obvious reasons.)

[0] https://en.wikipedia.org/wiki/ML_(programming_language)

DeathArrow 20 hours ago [-]
F# looks nothing like Rust. It's much more readable for me.
Daishiman 18 hours ago [-]
F#'s semantics don't describe memory management and lifetimes to the degree that Rust does.
creata 20 hours ago [-]
I think this is subjective, because I think Rust's syntax is (mostly) beautiful.

Given the constraint that they had to keep it familiar to C++ people, I'd say they did a wonderful job. It's like C++ meets OCaml.

Do you have any particular complaints about the syntax?

sorcercode 20 hours ago [-]
Kotlin programmer here who is picking up Rust recently. you're right, it's no Kotlin when it comes to the elegance of APIs but it's also not too bad at all.

In fact there are some things about the syntax that are actually nice like range syntax, Unit type being (), match expressions, super explicit types, how mutability is represented etc.

I'd argue it's the most similar system level language to Kotlin I've encountered. I encourage you to power through that initial discomfort because in the process it does unlock a level of performance other languages dream of.

whimsicalism 20 hours ago [-]
I don’t program much in Rust, but I find it a beautiful syntax… they took C++ and made it pretty much strictly better along with taking some inspiration from ML (which is beautiful imo)
cb321 18 hours ago [-]
You might enjoy https://nim-lang.org/ which has a Python-like syntax with even more flexibility really (UFCS, command-like calls, `fooTemplate: stuff` like user-defined "statements", user-defined operators, term-rewriting macros and more). With ARC it's really just about as safe as Rust and most of the stdlib is fast by default. "High quality" is kind of subjective, but they are often very welcoming of PRs.

Anyway, to your point, I think a newbie could pick up the basics quickly and later learn more advanced things. In terms of speed, like 3 different times I've compared some Nim impl to a Rust impl and the Nim was faster (though "at the extreme" speed is always more a measure of how much optimization effort has been applied, esp. if the language supports inline assembly).

https://cython.org/ , which is a gradually typed variant of Python that compiles to C, is another decent possibility.

fainpul 21 hours ago [-]
> Is there a "Kotlin for Rust"?

While it's not a systems language, have you tried Swift?

tsimionescu 20 hours ago [-]
Swift is as relevant to this discussion as Common Lisp.
frizlab 20 hours ago [-]
On the contrary, Swift is very relevant on this subject. It has high feature parity with rust, with a much readable syntax.
tsimionescu 17 hours ago [-]
It doesn't have the single feature that anyone cares about in Rust - compiler-enforced ownership semantics. And it's not in any way a system-level language (you couldn't use it without its stdlib for example, like in the Linux kernel).

The other features it shares with Rust are also shared by many other languages.
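The ownership semantics mentioned above can be sketched in a few lines (a minimal illustration, not from the thread): moving a value invalidates the original binding, and the compiler rejects later uses of it.

```rust
fn main() {
    let s = String::from("hello");
    let t = s; // ownership of the String moves from `s` to `t`
    // println!("{s}"); // compile error: borrow of moved value `s`
    println!("{t}");
}
```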

frizlab 12 hours ago [-]
Compiler-enforced ownership semantics is now a part of Swift with non-copyable types. In all honesty I do not know enough of rust to know how on-par the features are, but there is something.

Not sure about using Swift in a kernel as I’m not low-level enough to know that either, but you can indeed use Swift on embedded systems[1].

[1] https://www.swift.org/get-started/embedded/

ModernMech 20 hours ago [-]
But Swift is not "Kotlin for Rust" though, I can't see the connection at all. "Kotlin for Rust" would be a language that keeps you in the Rust ecosystem.
fainpul 20 hours ago [-]
The commenter I replied to seems to like Kotlin. Swift is extremely close to Kotlin in syntax and features, but is not for the JVM. Swift also has a lot of similarities with Rust, if you ignore the fact that it manages memory with automatic reference counting.
adastra22 18 hours ago [-]
A Kotlin for Rust would be a drop-in replacement where you could have a crate or even just a module written in this hypothetical language and it just works. No bridging or FFI. That’s not Swift.
fainpul 4 hours ago [-]
You all need to stop with the hair-splitting – it's tiresome.

My intention was to offer something that might be of interest to the person I replied to – not to write the official definition of "Kotlin for Rust" which everybody has to agree to. If you think my answer is nonsense, just skip it and read the next one. No need to reply. Nobody profits from this discourse.

frizlab 20 hours ago [-]
Ah yeah ok, makes sense in that way
20 hours ago [-]
kace91 20 hours ago [-]
Have you considered that part of it is not the language but the users?

I'm learning rust and the sample code I frequently find is... cryptically terse. But the (unidiomatic, amateurish) code I write ironically reads a lot better.

I think rust attracts a former c/c++ audience, which then bring the customs of that language here. Something as simple as your variable naming (character vs c, index vs i) can reduce issues already.

SquibblesRedux 20 hours ago [-]
As an official greybeard who has written much in C, C++, Perl, Python, and now Rust, I can say Rust is a wonderful systems programming language. Nothing at all like Perl, and as others have mentioned, a great relief from C++ while providing all the power and low-level bits and bobs important for systems programming.
JuniperMesos 18 hours ago [-]
I prefer Rust syntax to Python's purely on the grounds that Rust is a curly-brace language and Python is an indentation-sensitive language. I like it when the start and end of scopes in code are overtly marked with a non-whitespace character, it reduces the chances of bugs caused by getting confused about what lines of code are in what scope and makes it easier to use text editor tools to move around between scopes.

Beyond that issue, yeah most of Rust's syntactic noise comes from the fact that it is trying to represent genuinely complicated abstractions to support statically-checked memory safety. Any language with a garbage collector doesn't need a big chunk of Rust's syntax.

mixmastamyk 18 hours ago [-]
The rust syntax problem is not the braces but about all the #[()] || ::<<>> ‘stuff.
metaltyphoon 16 hours ago [-]
Attribute, lambda and turbofish?
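The three bits of syntax being named can be shown in one short sketch (illustrative names, not from the thread): an attribute, a `||` closure, and the `::<>` turbofish.

```rust
#[derive(Debug)] // #[...] is an attribute
struct Point { x: i32, y: i32 }

fn main() {
    let double = |n: i32| n * 2; // || ... is a closure ("lambda")
    // ::<Vec<i32>> is the "turbofish", pinning the collect target type
    let doubled = (1..=3).map(double).collect::<Vec<i32>>();
    println!("{:?} {:?}", Point { x: 1, y: 2 }, doubled);
}
```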
raincole 21 hours ago [-]
> Why are we pushing forward with a language that has a Perl-esque unreadability...?

The reason is the same for any (including Perl, except those meme languages where obfuscation is a feature) language: the early adopters don't think it's unreadable.

dev_l1x_be 20 hours ago [-]
I would argue that anything that is not Lisp has a complicated syntax.

The question is: is it worth it?

With Rust for the answer is yes. The reliability, speed, data-race free nature of the code I get from Rust absolutely justifies the syntax quirks (for me!).

bicarbonato 20 hours ago [-]
What do people actually mean when they say "the syntax is janky"?

I often see comparisons to languages like Python and Kotlin, but both encode far less information in their syntax because they don't have the same features as Rust, so there's no way for them to express the same semantics as Rust.

Sure, you can make Rust look simpler by removing information, but at that point you're not just changing syntax, you're changing the language's semantics.

Is there any language that preserves the same level of type information while using a less "janky" syntax?

nixpulvis 20 hours ago [-]
Aside from async/await, which I agree is somewhat janky syntactically, I'm curious what you consider to be janky. I think Rust is overall pretty nice to read and write. Patterns show up where you want them, type inference is somewhat limited but still useful. Literals are readily available. UFCS is really elegant. I could go on.

Ironically, I find Python syntax frustrating. Imports and list comprehensions read half backwards, variable bindings escape scope, dunder functions, doc comments inside the function, etc.
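Two of the Rust features listed above, pattern matching and local type inference, fit in a small sketch (hypothetical example, not from the thread):

```rust
// Patterns with guards, right where the branching happens.
fn describe(n: i32) -> &'static str {
    match n {
        0 => "zero",
        x if x < 0 => "negative",
        _ => "positive",
    }
}

fn main() {
    let nums = vec![-1, 0, 2]; // element type inferred as i32
    let labels: Vec<_> = nums.iter().map(|&n| describe(n)).collect();
    println!("{labels:?}");
}
```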

e12e 19 hours ago [-]
> It seems to me that we ought to have a system-level language which builds upon the learnings of the past 20+ years.

Maybe Ada, D or Nim might qualify?

dimgl 20 hours ago [-]
> Every time I consider learning Rust, I am thrown back by how... "janky" the syntax is. It seems to me that we ought to have a system-level language which builds upon the learnings of the past 20+ years.

I said this years ago and I was basically told "skill issue". It's unreadable. I shudder to think what it's like to maintain a Rust system at scale.

JuniperMesos 18 hours ago [-]
The syntax has relatively little to do with how easy or hard it is to maintain a rust system at scale. If you get something wrong the compiler will alert you, and most of the syntax is there for good reasons that anyone maintaining any kind of software system at scale needs to understand (and indeed the syntax helps you be clear about what you mean to the compiler, which facilitates helpful compiler error messages if you screw something up when modifying code).
yoyohello13 20 hours ago [-]
You get used to it. Like any language.
20 hours ago [-]
short_sells_poo 21 hours ago [-]
I'm writing this as a heavy python user in my day job. Python is terrible for writing complex systems in. Both the language and the libraries are full of footguns for the novice and expert alike. It has 20 years of baggage, the packaging and environment handling is nothing short of an unmitigated disaster, although uv seems to be a minor light at the end of the tunnel. It is not a simple language at this point. It has had so many features tacked on, that it needs years of use to have a solid understanding of all the interactions.

Python is a language that became successful not because it was the best in it's class, but because it was the least bad. It became the lingua franca of quantitative analysis, because R was even worse and matlab was a closed ecosystem with strong whiffs of the 80s. It became successful because it was the least bad glue language for getting up and running with ML and later on LLMs.

In comparison, Rust is a very predictable and robust language. The tradeoff it makes is that it buys safety for the price of higher upfront complexity. I'd never use Rust to do research in. It'd be an exercise in frustration. However, for writing reliable and robust systems, it's the least bad currently.

pxc 20 hours ago [-]
What's wrong with R? I used it and liked it in undergrad. I certainly didn't use it as seriously as the users who made Python popular, but to this day I remember R fondly and would never choose Python for a personal project.

My R use was self-taught, as well. I refused to use proprietary software for school all through high school and university, so I used R where we were expected to use Excel or MatLab (though I usually used GNU Octave for the latter), including for at least one or two math classes. I don't remember anything being tricky or difficult to work with.

SoftTalker 20 hours ago [-]
R is the most haphazard programming environment I've ever used. It feels like an agglomeration of hundreds of different people's shell aliases and scripting one-liners.

I'll grant my only exposure has been a two- or three-day "Intro to R" class but I ran screaming from that experience and have never touched it again.

It maybe worked against me that I am a programmer, not a statistician or researcher.

pxc 17 hours ago [-]
When I used it I was a computer science student. But I wasn't reading anyone else's code or trying to maintain anything complex, which is why I asked what I did. I'm sure there are quirks I never had to deal with.

So is it just that the stdlib is really big and messy?

forgotpwd16 20 hours ago [-]
Python had already become vastly popular before ML/AI. Scripting/tools/apps/web/... Only space that hasn't entered is mobile.
hedora 20 hours ago [-]
The sigils in Rust (and perl) are there to aid readability. After you use it a bit, you get used to ignoring them unless they look weird.

All the python programs I've had to maintain (I never choose python) have had major maintainability problems due to python's clean looking syntax. I can still look at crazy object oriented perl meta-programming stuff I wrote 20 years ago, and figure out what it's doing.

Golang takes another approach: They impoverished the language until it didn't need fancy syntax to be unambiguously readable. As a workaround, they heavily rely on codegen, so (for instance) Kubernetes is around 2 million lines of code. The lines are mostly readable (even the machine generated ones), but no human is going to be able to read them at the rate they churn.

Anyway, pick your poison, I guess, but there's a reason Rust attracts experienced systems programmers.

tcfhgj 21 hours ago [-]
what makes it unreadable for you?
forgotpwd16 21 hours ago [-]
Legit question really. A comparative study on language readability using codes doing the same thing written idiomatically in different languages will be interesting. Beyond syntax, idioms/paradigm/familiarity should also play role.
thousand_nights 20 hours ago [-]
not the person you're replying to, but as someone who doesn't know rust, on first glance it seems like it's littered with too many special symbols and very verbose. as i understand it this is required because of the very granular low level control rust offers

maybe unreadable is too strong of a word, but there is a valid point of it looking unapproachable to someone new

SAI_Peregrinus 20 hours ago [-]
I think the main issue people who don't like the syntax have with it is that it's dense. We can imagine a much less dense syntax that preserves the same semantics, but IMO it'd be far worse.

Using matklad's first example from his article on how the issue is more the semantics[1]

    pub fn read<P: AsRef<Path>>(path: P) -> io::Result<Vec<u8>> {
      fn inner(path: &Path) -> io::Result<Vec<u8>> {
        let mut file = File::open(path)?;
        let mut bytes = Vec::new();
        file.read_to_end(&mut bytes)?;
        Ok(bytes)
      }
      inner(path.as_ref())
    }
we can imagine a much less symbol-heavy syntax inspired by POSIX shell, FORTH, & ADA:

    generic
        type P is Path containedBy AsRef
    public function read takes type Path named path returns u8 containedBy Vector containedBy Result fromModule io
      function inner takes type reference to Path named path returns u8 containedBy Vector containedBy Result fromModule io
        try
            let mutable file = path open fromModule File 
        let mutable bytes = new fromModule Vector
        try
            mutable reference to bytes file.read_to_end
        bytes Ok return
      noitcnuf
      path as_ref inner return
    noitcnuf
and I think we'll all agree that's much less readable even though the only punctuation is `=` and `.`. So "symbol heavy" isn't a root cause of the confusion, it's trivial to make worse syntax with fewer symbols. And I like RPN syntax & FORTH.

[1] https://matklad.github.io/2023/01/26/rusts-ugly-syntax.html

DenisM 18 hours ago [-]
That might be an interesting extension to a dev environment or git - convert terse rust into semi-verbose explanation.

Sort of like training wheels, eventually you stop using it.

yoyohello13 20 hours ago [-]
People often misuse unreadable when they mean unfamiliar. Rust really isn't that difficult to read when you get used to it.
kasabali 18 hours ago [-]
Chinese isn't that difficult to read when you get used to it, too.
ben-schaaf 16 hours ago [-]
> littered with too many special symbols and very verbose

This seems kinda self-contradicting. Special symbols are there to make the syntax terse, not verbose. Perhaps your issue is not with how things are written, but that there's a lot of information for something that seems simpler. In other words a lot of semantic complexity, rather than an issue with syntax.

21 hours ago [-]
ModernMech 20 hours ago [-]
> upon the learnings of the past 20+ years.

That's the thing though... Rust does build on many of those learnings. For starters, managing a big type system is better when some types are implicit, so Rust features type inference to ease the burden in that area. They've also learned from C++'s mistake of having a context sensitive grammar. They learned from C++'s template nightmare error messages so generics are easier to work with. They also applied learnings about immutability being a better default that mutability. The reason Rust is statically linked and packages are managed by a central repository is based on decades of seeing how difficult it is to build and deploy projects in C++, and how easy it is to build and deploy projects in the Node / NPM ecosystem. Pattern matching and tagged unions were added because of how well they worked in functional languages.

As for "Perl-esque unreadability" I submit that it's not unreadable, you are just unfamiliar. I myself find Chinese unreadable, but that doesn't mean Chinese is unreadable.

> Is there a "Kotlin for Rust"?

Kotlin came out 16 years after Java. Rust is relatively new, and it has built on other languages, but it's not the end point. Languages will be written that build on Rust, but that will take some time. Already many nascent projects are out there, but it is yet to be seen which will rise to the top.

maximilianburke 20 hours ago [-]
In your opinion how does Rust compare to C++ for readability?
adastra22 18 hours ago [-]
C++ is vastly more readable. I will never go back to writing or maintaining C++ projects, but drop me into a C++ file to review something and it is usually very easy to grok.

Part of this is style and conventions though. I have implemented an STL container before, and that templating hell is far worse than anything I’ve ever seen in the Rust ecosystem. But someone following modern C++ conventions (e.g. a Google library) produces very clean and readable code.

maximilianburke 17 hours ago [-]
How do you handle understanding the semantics in the presence of custom overloaded operators?
adastra22 14 hours ago [-]
That's not what "readability" generally means. Yes, Rust's semantics are more tightly locked down, and that's a big part of why I use it. But given two well-written source files, one in modern C++ and one in contemporary Rust, can you quickly skim and understand what each one is supposed to be doing, irregardless of bugs that might be lurking? If you made me guess right now which one I'd have an easier time understanding, I'd guess the C++ file.
teunispeters 13 hours ago [-]
Honestly, rust is slightly more readable than obfuscated perl. I think I prefer K&R C, and I don't like K&R C. In terms of readability, maybe equivalent to early Win32 API? [3000 lines to set up API, then call to activate].

With C++ you have a range of readable from - easy and very approachable - to 2000s era Microsoft STL. (where not only is it close to unreadable, many many hidden bugs are ... somewhere. And behaviour is not consistent).

I will admit I don't find Rust quite as unreadable as the 2000s era Microsoft STL, but the latter's one of the things that pushed me far more into Linux dev.

Rust is the kind of language that would push me to write a new language that isn't rust. And maybe work on supporting all the zillion platforms rust doesn't support. Or maybe just stick to C. I'm not a fan, but there are worse, and yeah - there are some C++ libraries out there that are worse. Lots that are better, too, eg llvm source.

maximilianburke 13 hours ago [-]
How much Rust have you written?
yoyohello13 20 hours ago [-]
> It seems to me that we ought to have a system-level language which builds upon the learnings of the past 20+ years

I mean, Rust does. It builds on 20+ years of compiler and type system advancements. The syntax is verbose if you include all the things you can possibly do. If you stick to the basics it's pretty similar to most other languages. Hell, I'd say a lot of Rust syntax is similar to type-hinted Python.

Having said that, comparing a GC'd dynamic language to a systems programming language just isn't a fair comparison. When you need to be concerned about memory allocation you just need more syntax.
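The comparison to type-hinted Python can be made concrete with a small sketch (illustrative function, not from the thread); the Python equivalent is shown in the comment:

```rust
// Python:  def greet(name: str, times: int) -> list[str]:
//              return [f"hello, {name}" for _ in range(times)]
fn greet(name: &str, times: usize) -> Vec<String> {
    (0..times).map(|_| format!("hello, {name}")).collect()
}

fn main() {
    println!("{:?}", greet("world", 2));
}
```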

EugeneOZ 20 hours ago [-]
Does it really add any value to the conversation?
pessimizer 18 hours ago [-]
I think the problem for people is the traditional problem that a lot of people have had with a lot of languages since C took off: it doesn't look like ALGOL, and doesn't have the semantics of ALGOL.

The reason python looks like procedural pseudocode is because it was designed to look like procedural pseudocode. Rust is not just a new skin over ALGOL with a few opinionated idioms, it's actually different - mostly to give hints of intention to the compiler, but I don't even think it started from the same place. It's more functional than anything imo, but without caring about purity or appearance, and that resulted in something that superficially looks sufficiently ALGOL-like to confuse people who are used to python or Kotlin.

> I genuinely believe that languages ought to start with "newbie friendliness", and would love to hear challenges to that idea.

In conclusion, I think this is a red herring. Computer languages are hard. What you're actually looking for is something that is ALGOL-like for people who have already done the hard work of learning an ALGOL-like. That's not a newbie, though. Somebody who learned Rust first would make the same complaint about python.

paulddraper 19 hours ago [-]
Perl’s most notable syntax feature is sigils on all variables.

So it’s strange to hear a comparison. Maybe there’s something I’m missing.

It seems closer to C++ syntax than Perl.

simonask 21 hours ago [-]
What are you talking about? Rust’s function signature and type declaration syntaxes are extremely vanilla, unless you venture into some really extreme use cases with lots of lifetime annotations and generic bounds.

I seriously don’t get it.

    fn add(a: i32, b: i32) -> i32 { … }
Where’s the “Perl-esqueness”?
simonw 20 hours ago [-]

  trait Handler {
    fn handle<'a>(&self, input: &'a str) -> Result<&'a str, HandlerError>;
  }

  fn process_handler<'a>(
    handler: Box<dyn Handler + 'a>,
    input: &'a str,
  ) -> Result<&'a str, HandlerError> {
    handler.handle(input)
  }
creata 20 hours ago [-]
That's just a weird and unrealistic example, though. Like, why is process_handler taking an owned, boxed reference to something it only needs shared access to? Why is there an unnecessary 'a bound on handler?

In the places where you need to add lifetime annotations, it's certainly useful to be able to see them in the types, rather than relegate them to the documentation like in C++; cf. all the places where C++'s STL has to mention iterator and reference invalidation.
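The simplification this comment describes can be sketched as follows (with `Echo` as a hypothetical stand-in impl): take the handler by shared reference instead of `Box<dyn Handler + 'a>`, leaving only the input lifetime, which has to stay because it flows into the return type.

```rust
#[derive(Debug)]
struct HandlerError;

trait Handler {
    fn handle<'a>(&self, input: &'a str) -> Result<&'a str, HandlerError>;
}

// No Box, no `+ 'a` bound on the trait object.
fn process_handler<'a>(
    handler: &dyn Handler,
    input: &'a str,
) -> Result<&'a str, HandlerError> {
    handler.handle(input)
}

struct Echo;
impl Handler for Echo {
    fn handle<'a>(&self, input: &'a str) -> Result<&'a str, HandlerError> {
        Ok(input)
    }
}

fn main() {
    println!("{:?}", process_handler(&Echo, "hi"));
}
```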

WD-42 19 hours ago [-]
LLMs LOVE to write Rust like this. They add smart pointers, options and lifetimes everywhere when none of those things are necessary. I don’t know what it is, but they love over-engineering it.
nrds 16 hours ago [-]
As a first guess, they're trained on lots of social media and Q&A content. The former has lots of complaints about "look how complex rust is!" while the latter has lots of "help I've written very complex rust".
steveklabnik 19 hours ago [-]
I agree that the signature for process_handler is weird, but you could steelman it to take a borrowed trait object instead, which would have an extra sigil.

The handler function isn't actually unnecessary, or at least, it isn't superfluous: by default, the signature would include 'a on self as well, and that's probably not what you actually want.

I do think that the example basically boils down to the lifetime syntax though, and yes, while it's a bit odd at first, every other thing that was tried was worse.

creata 19 hours ago [-]
> The handler function isn't actually unnecessary, or at least, it isn't superfluous: by default, the signature would include 'a on self as well, and that's probably not what you actually want.

To clarify, I meant the 'a in `Box<dyn Handler + 'a>` in the definition of `process_handler` is unnecessary. I'm not saying that the <'a> parameter in the definition of Handler::handle is unnecessary, which seems to be what you think I said, unless I misunderstood.

steveklabnik 19 hours ago [-]
Ah yes, I misunderstood you in exactly that way, my apologies.
WD-42 20 hours ago [-]
Lifetimes really only come into play if you are doing something really obscure. Often times when I’m about to add lifetimes to my code I re-think it and realize there is a better way to architect it that doesn’t involve them at all. They are a warning sign.
tcfhgj 20 hours ago [-]
now show me an alternative syntax encoding the same information
19 hours ago [-]
bgwalter 20 hours ago [-]
...
steveklabnik 20 hours ago [-]
There's a deeper connection there: lifetimes are a form of type variable, just like in OCaml.
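That parallel fits in a couple of lines (an illustrative struct, not from the thread): `'a` is declared and used in the angle brackets much like the type parameter `T`.

```rust
struct Pair<'a, T> {
    first: &'a T,  // `'a` parameterizes the type, just as `T` does
    second: &'a T,
}

fn main() {
    let (x, y) = (1, 2);
    let p = Pair { first: &x, second: &y };
    println!("{} {}", p.first, p.second);
}
```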
19 hours ago [-]
quux0r 20 hours ago [-]
While I don’t disagree that this is at first blush quite complex, using it as an example also obscures a few additional details that aren’t present in something like Python, namely monads and lifetimes. I think in the absence of these, this code is a bit easier to read. However, if you had prior exposure to these concepts, I think that this is more approachable. I guess what I’m getting at here is that Rust doesn’t seem to be syntactic spaghetti so much as a confluence of several lesser-used concepts not typically found in other “simpler” languages.
simonask 20 hours ago [-]
> > really extreme use cases with lots of lifetime annotations and generic bounds

You choose as your example a pretty advanced use case.

simonw 16 hours ago [-]
Yeah, because if you exclude the bits that make Rust look like Perl then it won't look like Perl!
nullpoint420 20 hours ago [-]
Which is the exact use case someone would choose rust for over other languages
creata 20 hours ago [-]
No, the use cases of Rust are pretty much the same as the use cases of C++. Most Rust code shouldn't have objects with complicated lifetimes, just like most code in any language should avoid objects with complicated lifetimes.
piva00 20 hours ago [-]
Could have thrown a few uses of macros with the # and ! which threw me off completely while trying to read a Rust codebase as a non-Rust programmer.
haolez 20 hours ago [-]
That's simple even in Perl. The problem is when you start adding the expected idioms for real world problems.
nawgz 20 hours ago [-]
Python users don’t even believe in enabling cursory type checking, their language design is surpassed even by JavaScript, should it really even be mentioned in a language comparison? It is a tool for ML, nothing else in that language is good or worthwhile
gorgoiler 20 hours ago [-]
”[One] major contributor to APT suggested it would be better to remove the Rust code entirely as it is only needed by Canonical for its Launchpad platform. If it were taken out of the main APT code base, then it would not matter whether they were written in Rust, Python, or another language, since the tools are not directly necessary [for regular installations].”

Given the abundance of the hundreds of deb-* and dh-* tools across different packages, it is surprising that apt isn’t more actively split into separate, independent tools. Or maybe it is, but they are all in a monorepo, and the debate is about how if one niche part of the monorepo uses Rust then the whole suite can only be built on platforms that support Rust?

  #!/bin/sh
  build_core
  if has_rust
  then
    build_launchpad_utils
  fi
It’s like arguing about the bike shed when everyone takes the bus except for one guy who cycles in every four weeks to clean the windows.
rrmm 19 hours ago [-]
If this could be done it seems like the ideal compromise. Everyone gets what they want.

That said, eventually more modern languages will become dependencies of these tools one way or another (and they should). So Debian as a whole should probably come to a consensus on how that should happen, in some standard and fair fashion.

GuB-42 20 hours ago [-]
Shouldn't we wait until Rust gets full support in GCC? This should resolve the issue with ports without a working Rust compiler.

I don't have a problem with Rust, it is just a language, but it doesn't seem to play along well with the mostly C/C++-based UNIX ecosystem, particularly when it comes to dependencies and package management. C and C++ don't have a standard package manager and often rely on system-wide dynamic libraries, while Rust has cargo, which promotes large dependency graphs of small libraries, and static linking.

theoldgreybeard 20 hours ago [-]
Rust developers are so dogmatic about their way being the best and only way that I just avoid it altogether. I've had people ask about Rust in issues/discussions in small hobby projects I released as open source - I just ban them immediately because there is no reasoning with them and they never give up. Open source terrorists.
aeve890 14 hours ago [-]
Username checks out.

>Open source terrorists

Rust users and their evangelism rub me the wrong way. Yes it's safe, yes it's ergonomic, but why the weird aura around the people who insist on "Rewrite it in Rust, Deus Vult"? It eludes me.

tucnak 19 hours ago [-]
"Open source terrorism" is a hilarious designation for Rust-like traditions and customs. I wonder what other programming language/software communities may fall under this definition?
petcat 21 hours ago [-]
Interesting how instead of embracing Rust as a required toolchain for APT, the conversation quickly devolved into

"why don't we just build a tool that can translate memory-safe Rust code into memory-unsafe C code? Then we don't have to do anything."

This feels like swimming upstream just for spite.

forgotpwd16 21 hours ago [-]
>tool that can translate memory-safe Rust code into memory-unsafe C code

Fwiw, there are two such ongoing efforts. One[1] is an alternative Rust compiler, written in C++, that emits C (in the project's words, high-level assembly); the other[2] is a Rust compiler backend/plugin (emitting C is an extra goal on top of its initial one, compiling Rust to CLR assembly). The latter apparently is[3] quite modular and could be adapted for other targets too. Other options are continuing/improving the GCC front-end for Rust, and a recent attempt[4] to write a Rust compiler in C that compiles to QBE IR, which can then be compiled with QBE/cc.

[1]: https://github.com/thepowersgang/mrustc
[2]: https://github.com/FractalFir/rustc_codegen_clr
[3]: https://old.reddit.com/r/rust/comments/1bhajzp/
[4]: https://codeberg.org/notgull/dozer

pabs3 9 hours ago [-]
Also there is a rustc codegen backend that uses GCC, so you can just skip the C part:

https://github.com/rust-lang/rustc_codegen_gcc

pbohun 20 hours ago [-]
That's not what the comment said. It said, "How about a Rust to C converter?..." The idea was that using a converter could eliminate the problem of not having a rust compiler for certain platforms.
epolanski 21 hours ago [-]
The problem is that rust is being shoved in pointless places with a rewrite-everything-in-rust mentality.

There's lunatics that want to replace basic Unix tools like sudo, which have been battle tested for ages; the replacement has been a mess of bugs till now.

Instead, Rust should find its niches beyond rewriting what works, tackling what doesn't.

saghm 21 hours ago [-]
FWIW sudo has been maintained by an OpenBSD developer for a while now but got replaced in the base system by doas. Independent of any concerns about Rust versus C, I don't think it's quite as unreasonable as you're claiming to consider alternatives to sudo given that the OS that maintains it felt that it was flawed enough to be worth writing a replacement for from scratch.
SoftTalker 19 hours ago [-]
sudo had grown a lot of features and a complicated config syntax over the years, which ended up being confusing and rarely needed in practice. doas is a lot simpler. It wasn't just a rewrite of a flawed utility but a simplification of it.
saghm 18 hours ago [-]
Regardless of the exact terminology used to describe why it was done, my point is that assuming that people are "lunatics" because they want to replace sudo is not a particularly compelling claim, and that's what the comment I was responding to had said.
anarki8 20 hours ago [-]
> The problem is that rust is being shoved in pointless places with a rewrite-everything-in-rust mentality.

> There's lunatics ...

I think the problem is people calling developers "lunatics" and telling them which languages they must use and which software they must not rewrite.

Battle tested is not bulletproof: https://cybersecuritynews.com/sudo-linux-vulnerability/

Applying strict compile time rules makes software better. And with time it will also become battle tested.

epolanski 18 hours ago [-]
My point had nothing to do with languages.

My point is against rewrites of critical software for the point of rewriting it *insert my favorite language*. Zig is also a safer language than C, as are many other alternatives, yet the Zig community is not obsessed with rewriting old software but with writing new software. And the Zig compiler has excellent C interop (in fact it can compile C and C++), yet the community remains focused on new code.

There are many factors that make software reliable; it's not just a matter of pretty types and memory safety, there are factors like platform/language stability, skill and expertise of the authors, and development speed and feedback.

eviks 6 hours ago [-]
> My point is against rewrites of critical software for the point of rewriting it

Because you're replacing a real point with a made-up one. The reason for rewriting is to get critical benefits for critical software which battle testing has shown can't be had in the current language, not "my favorite" one.

> Zig community is not obsessed

They don't even have a 1.0 language? You're also ignoring the critical difference in the level of safety

anarki8 13 hours ago [-]
> My point is against rewrites of critical software for the point of rewriting it insert my favorite language.

In this specific case we are talking about the maintainer adding a new language into the existing codebase.

I think refactoring parts of the software in the new language is what you call "rewrite" here, correct?

So what improvements does it bring? You actually answered it yourself:

> it's not just a matter of pretty types and memory safety

So indeed, stricter/stronger type system and additional automatic compile time and runtime checks are a major improvement.

> platform

As already mentioned in this thread: none of the platforms lacking Rust were officially supported anyway.

> language stability

Rust is extremely stable and backwards compatible - 1.0 code still compiles without any issues on 1.90 and will continue to do so for the foreseeable future.

> skill and expertise of the authors

The same developers continue to contribute, and newcomers have more checks in place to prevent bugs.

> development speed

I guess you imply here that developing in C++ is faster. It's in fact not if your aim is to produce correct software. There are so many more things to keep in mind and take care of with C++, you have fewer automatic checks being done by the compiler and the type system.

About Zig: it's a nice language and much more comfortable to use than C/C++ IMO, but compared to Rust it lacks strictness and safety, so the added benefits are smaller and fewer once you set aside subjective preferences.

mmastrac 21 hours ago [-]
sudo is not fully battle tested, even today. You just don't really see the CVEs getting press.

https://www.oligo.security/blog/new-sudo-vulnerabilities-cve...

21 hours ago [-]
jamespo 20 hours ago [-]
Neither of those vulnerabilities looks like something Rust would necessarily have prevented, however.
metaltyphoon 17 hours ago [-]
That's not the point the OP was making; "battle tested" doesn't mean free of bugs.
marcosdumay 20 hours ago [-]
Cue all those battle-tested programs in which people keep finding vulnerabilities several decades after they were considered "done". You should try looking at the test results once in a while.

And by the way, we had to replace almost all of the basic Unix tools at the turn of the century because they were completely unfit for purpose. There aren't many left.

dralley 21 hours ago [-]
Converting parsers to Rust is not "pointless". Doing string manipulation in C is both an awful experience and also extremely fertile ground for serious issues.
donkeybeer 21 hours ago [-]
apt is C++
dboon 20 hours ago [-]
It’s very easy to write a string library in C which makes string operations high level (both in API and memory management). Sure, you shouldn’t HAVE to do this. I get it. But anyone writing a parser is definitely skilled enough to maintain a couple hundred lines of code for a linear allocator and a pointer plus length string. And to be frank, doing things like “string operations but cheaply allocated” is something you have to do ANYWAY if you’re writing e.g. a parser.

This holds for many things in C

steveklabnik 20 hours ago [-]
This is just a variation of the "skill issue" argument.

If it were correct, we wouldn't see these issues continue to pop up. But we do.

josefx 17 hours ago [-]
I think it is more a matter of convenience. There are countless string implementations for C, some tiny projects, others part of larger frameworks like Glib. At the end of the day a C developer has to decide if they are going to pull in half of GNOME to handle a few lines of IO, or if they are just going to use the functions the C standard conveniently ships with. Most people are going to do the latter.
criddell 20 hours ago [-]
> a pointer plus length

What would length represent? Bytes? Code points?

Anyway, I think what you are asking for already exists in the excellent ICU library.

And it's not a very easy thing to maintain. Unicode stuff changes more often than you might think and it can be political.

epolanski 21 hours ago [-]
Issues that have been battle tested for ages.
Cthulhu_ 21 hours ago [-]
Sure, which is highly valuable information that hopefully made its way into a testing / verification suite. Which can then be used to rewrite the tool into a memory-safe language, which allows a lot of fixes and edge cases that were added over time to deal with said issues to be refactored out.

Of course there's a risk that new issues are introduced, but again, that depends a lot on the verification suite for the existing tool.

Also, just because someone did a port, doesn't mean it has to be adopted or that it should replace the original. That's open source / the UNIX mentality.

honeycrispy 20 hours ago [-]
Calling it pointless comes across as jaded. It's not pointless.

Supporting Rust attracts contributors, and those contributors are much less likely to introduce vulnerabilities in Rust when contributing vs alternatives.

rrmm 20 hours ago [-]
to introduce certain common vulnerabilities ...

not vulnerabilities in general.

krior 16 hours ago [-]
And seatbelts and airbags do not prevent all harm, yet they are still universally used.
rrmm 14 hours ago [-]
It's a pedantic point admittedly, but I think it's important to be realistic and clear that Rust isn't a panacea.
sidewndr46 21 hours ago [-]
I seem to remember going through this with systemd in Ubuntu. Lots of lessons learned seemed to come back as "didn't we fix this bug 3 years ago?"
genewitch 21 hours ago [-]
We need Lisp, COBOL, and Java in apt, too. And Firefox.
VWWHFSfQ 21 hours ago [-]
Is the apt package manager a pointless place? It seems like a pretty foundational piece of supply chain software with a large surface area.
metalforever 20 hours ago [-]
The author of the rust software did not solve the platform problem, as a result it is not a solution. Since it is not a solution, it should be reverted. It's really that simple.
sidewndr46 21 hours ago [-]
All compilers do anyway is translate from one language specification to another. There's nothing magical about Rust or any specific architecture target. The compiler of a "memory safe" language like Rust could easily output assembly with severe issues in the presence of a compiler bug. There's no difference between compiling to assembly vs. C in that regard.
simonask 21 hours ago [-]
The assumption here is that there exists an unambiguous C representation for all LLVM IR bitcode emitted by the Rust compiler.

To my knowledge, this isn’t the case.

andsoitis 21 hours ago [-]
> The assumption here is that there exists an unambiguous C representation for all LLVM IR bitcode emitted by the Rust compiler.

> To my knowledge, this isn’t the case.

Tell us more?

simonask 21 hours ago [-]
Source-to-source translation will be very hard to get right, because lots of things are UB in C that aren’t in Rust, and obviously vice versa.

Rust has unwinding (panics), C doesn’t.

adwn 21 hours ago [-]
For one, signed integer overflow is allowed and well-defined in Rust (the result simply wraps around in release builds), while it's Undefined Behavior in C. This means that the LLVM IR emitted by the Rust compiler for signed integer arithmetic can't be directly translated into the analogous C code, because that would change the semantics of the program. There are ways around this and other issues, but they aren't necessarily simple, efficient, and portable all at once.
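To make the semantic gap concrete, here's a small sketch (not from the thread; all names are standard library items) of the overflow behaviors being described:

```rust
fn main() {
    // Explicit wrapping arithmetic: defined two's-complement wraparound,
    // in both debug and release builds.
    assert_eq!(i32::MAX.wrapping_add(1), i32::MIN);

    // checked_add surfaces the overflow as a value instead of wrapping.
    assert_eq!(i32::MAX.checked_add(1), None);
    assert_eq!(1i32.checked_add(1), Some(2));

    // Plain `+` panics on overflow in debug builds and wraps in release;
    // either way the behavior is specified, unlike C's `INT_MAX + 1`,
    // which an optimizer is allowed to assume never happens.
    println!("ok");
}
```

A C-emitting backend has to preserve the wrapping semantics explicitly (e.g. by computing in unsigned arithmetic), which is one of the "ways around this" mentioned above.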
pengaru 19 hours ago [-]
You guys seem to be assuming transpiling to C means it must produce C that DTRT on any random C compiler invoked any which way on the other side, where UB is some huge possibility space.

There's nothing preventing it from being some specific invocation of a narrow set of compilers like gcc-only of some specific version range with a set of flags configuring the UB to match what's required. UB doesn't mean non-deterministic, it's simply undefined by the standard and generally defined by the implementation (and often something you can influence w/cli flags).

adwn 18 hours ago [-]
> You guys seem to be assuming transpiling to C means it must produce C that DTRT on any random C compiler invoked any which way on the other side, where UB is some huge possibility space.

Yes, that's exactly what "translating to C" means – as opposed to "translating to the very specific C-dialect spoken by gcc 10.9.3 with patches X, Y, and Z, running on an AMD Zen 4 under Debian 12.1 with glibc 2.38, invoked with flags -O0 -g1 -no-X -with-Y -foo -blah -blub...", and may the gods have mercy if you change any of this!

Do you see the problem?

thesz 17 hours ago [-]
One can do a direct translation from Rust AST/IR to C. Many functional languages do that, and C++ started life as a compiler (Cfront) that emitted C.
tsimionescu 20 hours ago [-]
The gigantic difference is that assembly language has extremely simple semantics, while C has very complex semantics. Similarly, assembler output is quite predictable, while C compilers are anything but. So the level of match between the Rust code and the machine code you'll get from a Rust-to-assembly compiler will be much, much easier to understand than the match you'll get between the Rust code and the machine code produced by a C compiler compiling C code output by a Rust-to-C transpiler.
maeln 21 hours ago [-]
You know, it is easy to find this kind of nitpicking and seemingly eternal discussion over details exhausting and meaningless, but I do think it is actually a good sign and a consequence of "openness". In politics, authoritarianism tends to show a pretty façade where everyone mostly agrees (the reality be damned), and discussion and dissenting voices are only allowed to a certain extent, as a communication tool. This is usually what we see in corporate development.

Free software is much more like a democracy: everyone can voice their opinion freely, and it tends to be messy, confrontational, nitpicky. It does often slow down change, but it also avoids the common pitfall of authoritarian regimes of going head first into a wall at the speed of light.

cogman10 19 hours ago [-]
What?

Open-source software doesn't have one governance model, and most of it starts out as basically pure authoritarian rule.

It's only as the software ages, grows, and becomes more integral that it switches to more democratic forms of maintenance.

Even then, the most important OS code on the planet, the kernel, is basically a monarchy with King Linus holding absolute authority to veto the decision of any of the Lords. Most stuff is maintained by the Lords but if Linus says "no" or "yes" then there's no parliament which can override his decision (beyond forking the kernel).

leo_e 18 hours ago [-]
As someone fighting the C++ toolchain daily, there is a painful irony in seeing APT—the tool supposed to solve dependency hell—creating its own dependency crisis.

I sympathize with the maintainers of retro hardware. But honestly? Holding back the security and maintainability of a modern OS base layer just so an AlphaStation from 1998 can boot feels backwards.

The transition pain is real, and Canonical handled the communication poorly. But the 'legacy C tax' is eternal. We have to move critical infrastructure off it eventually.

creatonez 19 hours ago [-]
I have never seen a program segfault and crash more than apt. The status quo is extremely bad, and it desperately needs to be revamped in some way. Targeted rewrites in a memory safe & less mistake-prone language sounds like a great way to do that.

If you think this is a random decision caused by hype, cargo culting, or a maintainer's/canonical's mindless whims... please, have a tour through the apt codebase some day. It is a ticking time bomb, way more than you ever imagined such an important project would be.

sherr 19 hours ago [-]
I've been using apt regularly on Debian for a long time and never seen it crash or segfault. Very strange that you do. All software has bugs of course, but apt is so heavily used that I expect it gets attention. It just works for me.
dontlaugh 18 hours ago [-]
I’ve seen it segfault once. I don’t know how common that is, but it would be nice to make that less likely.
jzb 12 hours ago [-]
I’m very very curious to know what it is you’re doing to experience this: I’ve used Debian and its derivatives for 25 years now. On desktops, laptops, and servers. x86, x86-64, and Arm 64. I have never had a segfault with APT. Not a single time. Problems with dependencies or such a few times, but I don’t recall APT ever crashing on me.

Please, share more details.

pmarin 18 hours ago [-]
20 years on Debian. Not a single crash with apt
MisterTea 19 hours ago [-]
"and not be held back by trying to shoehorn modern software on retro computing devices"

Nice. So discrimination against poor users who are running "retro" machines because that is the best they can afford or acquire.

I know of at least two devs who are stuck with older 32-bit machines because that is what they can afford/obtain. I even offered to ship them a spare laptop with a newer CPU, and they said thanks, but import duties in their country would be unaffordable. Thankfully they are also tinkering with 9front, which has little to no issue with portability and still supports 32-bit.

quux 19 hours ago [-]
Looking at the list of affected architectures: Alpha (alpha), Motorola 680x0 (m68k), PA-RISC (hppa), and SuperH (sh4) I think these are much much more likely to be run by enthusiasts than someone needing an affordable computer.
chc4 19 hours ago [-]
No one is using an Alpha, Motorola 680x0, PA-RISC, or SuperH computer because that's the only thing they can afford. Rust supports 32bit x86.
cogman10 19 hours ago [-]
The last 32bit laptop CPU was produced nearly 20 years ago.

Further, there are still several LTS linux distros (including the likes of Ubuntu and Debian) which don't have the rust requirement and won't until the next LTS. 24.04 is supported until 2029. Meaning you are talking about a 25 year old CPU at that point.

And even if you continue to need support, Debian-based distros aren't the only ones on the planet. You can pick something else if it really matters.

yjftsjthsd-h 17 hours ago [-]
> The last 32bit laptop CPU was produced nearly 20 years ago.

15 years max; I can easily find documentation of Intel shipping Atom chips without 64-bit support in 2010, though I haven't found a good citation for when exactly that ended.

cogman10 16 hours ago [-]
https://en.wikichip.org/wiki/intel/microarchitectures/bonnel...

Looks like it was ultimately phased out in 2011.

It was only the first atom uarch that was 32. The next uarch (Saltwell) was 64.

aaronblohowiak 19 hours ago [-]
Rust works fine on 32 bit, (and 16 bit) that’s not what they mean…
quux 19 hours ago [-]
Rust even works on 8-bit via the LLVM-MOS backend for MOS 6502 :)
stingraycharles 19 hours ago [-]
Poor people aren’t running exotic hardware.
miladyincontrol 18 hours ago [-]
Agree, using these architectures isnt related to one's finances and unaffordability of hardware. Using obscure hardware like this for hobbyist reasons is a privilege, and one that rarely demands the latest upstream for everything at that.
crote 16 hours ago [-]
We're basically at a point where running those older machines is more expensive, once you factor in power use.

Even then, people using ancient fifth-hand machines are almost certainly still going to run x86 - which means they'll have no trouble running Rust as 32-bit x86 is a supported target. Their bigger issue is going to be plain old C apps dropping 32-bit support!

"Retro" in this case genuinely means "horribly outdated". We're talking about systems with CPUs in the hundreds of MHz with probably fewer than a gigabyte in memory. You might do some basic word processing using Windows 95, but running anything even remotely resembling a modern OS is completely impossible. And considering their age and rarity, I'd be very impressed if anyone in a poor country managed to get their hands on it.

tucnak 19 hours ago [-]
You seem to be involved with 9front.

Are you trying to suggest there is a nontrivial community of people who cannot afford modern 64-bit Linux platforms, and opt for 9front on some ancient 32-bit hardware instead? Where are they coming from? Don't get me wrong, I love the 9 as much as the next guy, but you seem to paint it as some kind of affordability frontier...

MisterTea 18 hours ago [-]
> Where are they coming from?

One lives in Brazil and I think the other lives in the Middle East. They both have old second-hand 32-bit laptops from the 00's.

> but you seem to paint it as some kind of affordability frontier...

Yes, because there are people still using old hardware who have no choice. Also, what's the problem with supporting old architectures? Plan 9 solved the portability problem, and a prominent user recently ported it to cheap MIPS routers, so we can run Plan 9 on cheap second-hand network hardware. We have the toolchain support, so we use it.

And believe me, I understand a raspberry pi or whatever is much faster and uses less power but I would rather we reduce e-waste where possible. I still run old 32 bit systems because they work and I have them.

cogman10 17 hours ago [-]
> whats the problem with supporting old architectures?

It's not free, it's not easy, and it introduces hard to test and rarely run code paths that may or may not have problems on the target architecture.

I think there's a pretty strong argument for running hardware produced in the last 10 years for the next 10 or 20 years. However, it should be recognized that there were massive advances in compute power from 2000 to 2010 that didn't happen from 2010 to 2025.

A Core 2 Quad (produced in 2010) has ~ 1/2 the performance of the N150 (1/4 the single core performance of the latest AMD 9950).

Meanwhile a Pentium 3 from 2000 has roughly 1/10th the performance of the same Core 2 Quad.

There are simply far fewer differences between CPUs made in 2010 and today vs CPUs made in 2000 to 2010. Even the instruction set has basically become static at this point. AVX isn't that different from SSE and there's really not a whole bunch of new instructions since the x64 update.

crote 15 hours ago [-]
> There are simply far fewer differences between CPUs made in 2010 and today vs CPUs made in 2000 to 2010.

I have stopped replacing machines (and smartphones) because they became outdated: the vast majority of compile tasks is finished in a fraction of a second, applications basically load instantly from SSD, and I never run out of RAM. The main limiting factor in my day-to-day use is network latency - and nothing's going to solve that.

My main machine is a Ryzen 9 3900X with 32GB of RAM and a 1TB SSD. And honestly? It's probably overkill. It's on the replacement list due to physical issues - not because I believe I'll significantly benefit from the performance improvements of a current-gen replacement. I'm hoping it'll last until AM6 comes around!

Every task is either "basically instantly", "finishes in a sip of coffee", or "slow enough for a pee break / email response / lunch break". Computers aren't improving enough to make my tasks upgrade to a faster category, so why bother?

monegator 20 hours ago [-]
Sometimes you do wonder if those 4chan memes about those who push rust rewrites are just memes or what..
steveklabnik 19 hours ago [-]
A maintainer of a project making a decision about their project is not someone pushing a re-write.
mrob 21 hours ago [-]
The announcement says:

>In particular, our code to parse .deb, .ar, .tar, and the HTTP signature verification code would strongly benefit from memory safe languages and a stronger approach to unit testing.

I can understand the importance of safe signature verification, but how is .deb parsing a problem? If you're installing a malicious package you've already lost. There's no need to exploit the parser when the user has already given you permission to modify arbitrary files.

Muromec 20 hours ago [-]
It is possible the deb package is parsed to extract some metadata before being installed and before the signature is verified.

Also there is the aspect of defence in depth. Maybe you can compromise one package that itself can't do much, but the installer runs with higher privileges and has network access.

Another angle -- installed package may compromise one container, while a bug in apt can compromise the environment which provisions containers.

And then at some point there is "oh..." moment when the holes in different layers align nicely to make four "bad but not exploitable" bugs into a zero day shitshow

SAI_Peregrinus 19 hours ago [-]
> It is possible the deb package is parsed to extract some metadata before being installed and before verifying signature.

Yes, .deb violates the cryptographic doom principle[1] (if you have to perform any cryptographic operation before verifying the message authentication code (or signature) on a message you’ve received, it will somehow inevitably lead to doom).

Their signed package formats (there are two) add extra sections to the `ar` archive for the signature, so they have to parse the archive metadata & extract the contents before validating the signature. This gives attackers a window to try to exploit this parsing & extraction code. Moving this to Rust will make attacks harder, but the root cause is a file format violating the cryptographic doom principle.

[1] https://moxie.org/2011/12/13/the-cryptographic-doom-principl...
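To make the ordering concrete, here's a toy sketch (my own, not APT's code) of "verify before parse"; `toy_mac` is a stand-in checksum for a real signature or MAC check:

```rust
// Authenticate the raw bytes first; only hand them to the
// (attack-surface) parser afterwards.
fn toy_mac(payload: &[u8], key: u8) -> u8 {
    payload.iter().fold(key, |acc, b| acc.wrapping_add(*b))
}

fn verify_then_parse(raw: &[u8], key: u8) -> Option<Vec<u8>> {
    // Last byte is the "tag"; everything before it is the payload.
    let (payload, tag) = raw.split_last().map(|(t, p)| (p, *t))?;
    // Step 1: verify over the untouched bytes.
    if toy_mac(payload, key) != tag {
        return None; // refuse to parse unauthenticated input
    }
    // Step 2: only now parse/extract the contents.
    Some(payload.to_vec())
}

fn main() {
    let key = 7u8;
    let mut msg = b"hello".to_vec();
    let tag = toy_mac(&msg, key);
    msg.push(tag);
    assert_eq!(verify_then_parse(&msg, key), Some(b"hello".to_vec()));

    // Tampered input is rejected before any parsing happens.
    let mut bad = msg.clone();
    bad[0] ^= 0xff;
    assert_eq!(verify_then_parse(&bad, key), None);
}
```

The format described above inverts this order: the `ar` metadata must be parsed before the signature can even be located, so the parser runs on unauthenticated bytes.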

move-on-by 18 hours ago [-]
Sorry in advance if this is a dumb question, but isn't Rust's 'Cargo' package manager one of the draws of Rust? While I follow that Rust's memory safety is a big benefit, doesn't the package manager, and the supply-chain attacks that come along with it, take away from those benefits? For reference, NPM has had no shortage of supply-chain security incidents.

How would adding Rust to such core dependencies not introduce new supply chain attack opportunities?

SAI_Peregrinus 17 hours ago [-]
Cargo defaults to downloading from `crates.io` but can easily be configured to get its dependencies elsewhere. That could be an alternative registry run by a Linux distribution or other organization, or even just overriding paths to dependencies to where local copies are stored. I'd expect a distro like Debian to mandate the use of an internal crate registry which mirrors the crates they're choosing to include in the distro with the versions they're choosing. This adds supply chain attack opportunities in the same way that adding any software adds supply chain attack opportunities, the use of `cargo` instead of `curl` to download sources doesn't change anything here.
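For reference, the redirection described above uses Cargo's standard source-replacement mechanism; this is essentially the config that `cargo vendor` prints:

```toml
# .cargo/config.toml — point crates.io at a local vendored directory,
# so builds never touch the network and only distro-reviewed crate
# versions are used.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```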
jwilk 15 hours ago [-]
> Their signed package formats (there are two) add extra sections to the `ar` archive for the signature

APT does not verify package-level signatures (and nobody uses them anyway), so this is irrelevant.

Kwpolska 20 hours ago [-]
The parser can run before the user is asked for permission to make changes. The parsed metadata can then discourage the user from installing the package (e.g. because of extremely questionable dependencies).

Dependencies are probably in the apt database and do not need parsing, but not everything is, or perhaps apt can install arbitrary .deb files now?

jwilk 15 hours ago [-]
Yes, you can do "apt-get install --dry-run ./nyancat_1.5.2-0.2_i386.deb" these days.
marcosdumay 20 hours ago [-]
.deb is a packaging format like any other. There are plenty of reasons for parsing without running the code inside them.
renewiltord 18 hours ago [-]
Preferably one should not be able to pwn a package repository by uploading a single malicious .deb file to it. E.g. people on Ubuntu frequently use PPAs (personal package archives). You can run your own on Launchpad. If you upload a malicious package, it should not destroy Launchpad.
phkahler 18 hours ago [-]
I don't think programs should use mixed languages if it's at all avoidable. Linux would be an exception, because I think it can benefit from oxidation and it'll be decades before RedoxOS is ready.
Surac 17 hours ago [-]
My biggest problem with Rust is that I can't read it. I never know what a symbol means: is it a keyword, a type, a variable, a constant, or a macro? Sure, loading it into an IDE with a language server may help with understanding the code.
pornel 13 hours ago [-]
That's just your lack of familiarity with the foreign-to-you language (you may be unable to read Korean too, despite Korean being pretty readable).

Syntactically, Rust is pretty unambiguous, especially compared to C-style function and variable definitions. You get fn and let keywords, and definitions that are generally read left-to-right, instead of starting with an arbitrary identifier that may be a typedef, a preprocessor macro, or part of a type that is read in so-called "spiral" order (which isn't even a spiral, but more complex than that).

xiej 15 hours ago [-]
There are only a few common keywords. Types are PascalCase, variables are snake_case, constants are SCREAMING_CASE, and macros end with an exclamation point (!).
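A minimal sketch (hypothetical names) putting those conventions side by side:

```rust
const MAX_RETRIES: u32 = 3; // constants: SCREAMING_CASE

struct HttpClient {          // types: PascalCase
    retry_count: u32,        // fields/variables: snake_case
}

fn main() {
    let client = HttpClient { retry_count: MAX_RETRIES };
    // Macros always end in `!`, so they can't be mistaken for functions.
    println!("retries = {}", client.retry_count);
    assert_eq!(client.retry_count, 3);
}
```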
ChrisArchitect 21 hours ago [-]
Related:

Hard Rust requirements from May onward

https://news.ycombinator.com/item?id=45779860

scotty79 21 hours ago [-]
Maybe there's a place for a "Future Debian" distro that could be a venue for phasing out old tech and introducing new features?
powerclue 20 hours ago [-]
Or maybe old devices and tech should expect a limited support window, or be expected to fork after some time?
yjftsjthsd-h 17 hours ago [-]
Some of us abandoned commercial OSs for Debian precisely to escape that mentality.
powerclue 16 hours ago [-]
I think that's reasonable, but surely there's a limit? Like, if one user exists on an old piece of tech, does Debian need to support them forever?

I think this is a nuanced call, personally, and I think there's some room for disagreements here. I just happen to believe that maybe the right decision is to fork at some point and spin off legacy forks when there's a vanishingly small suite of things that cause friction with progress.

yjftsjthsd-h 15 hours ago [-]
That's fair. There's even precedent in the form of e.g. https://archlinux32.org/ , though I personally view that kind of fracturing as undesirable. Personally I'd rather lean the other way; if folks want to be forward-thinking at the expense of breaking compatibility, they could just go work on Fedora or Arch or any other distro that wants to be the future instead of "the universal operating system".

In this particular case, though, I would specifically argue that it's a poor trade-off; AIUI, the whole thing comes down to a tiny bit of functionality that shouldn't even be in core apt, that is of little use to most of the community, and that really should be factored out into an optional additional package anyways, at which point it need not affect less popular ports.

1313ed01 13 hours ago [-]
I used Debian since around version 1.2 (even if not always as my main desktop OS) but increasingly using FreeBSD and NetBSD on my old computers.
ForHackernews 19 hours ago [-]
It sounds like all of the affected Debian ports are long since diverged from the official Debian releases anyway:

> The sh4 port has never been officially supported, and none of the other ports have been supported since Debian 6.0.

Wikipedia tells me Debian 6 was released on 6 February 2011

gpm 19 hours ago [-]
Isn't that literally what debian unstable is for?
ethin 19 hours ago [-]
This is just one reason I'm not the biggest fan of Rust. The language is good (as well as what it solves), but this tendency to force it into everything (even where it would provide no benefit whatsoever) is just mind-boggling to me. And the Rust evangelists then wonder why there are so many anti-rust folk.
koakuma-chan 20 hours ago [-]
[dead]
bakugo 21 hours ago [-]
[flagged]
fn-mote 21 hours ago [-]
> What is it about Rust fanatics [....]

The universalization from one developer's post to all Rust "fanatics" is itself an unwelcome attack. I prefer to keep my discussion as civilized as possible.

Just criticize the remark.

tbrownaw 20 hours ago [-]
I read that more as "here's a perfect example of something I'd noticed already" rather than "wow this is a terrible first impression your group is making".

Perhaps this reading is colored by how this same pair of sentiments seems to come up practically every single time there's a push to change the language for some project.

estimator7292 21 hours ago [-]
[flagged]
jvanderbot 21 hours ago [-]
I think you'll experience some pushback on the assertion that that particular quote has a lot of arrogance or disdain in it.

Building large legacy projects can be difficult and tapping into a thriving ecosystem of packages might be a good thing. But it's also possible to have "shiny object" or "grass is greener" syndrome.

NetMageSCW 21 hours ago [-]
“If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.”

If that’s not arrogant, I don’t know what is.

Cthulhu_ 21 hours ago [-]
Is it arrogant or a clear and straightforward announcement that a Decision has been made and these are the consequences? I'm not seeing any arrogance in the message myself.
jvanderbot 21 hours ago [-]
"Arrogant" does not mean "forceful" or "assertive" or "makes me angry".

This is forceful, assertive, and probably makes people angry.

Does the speaker have the authority to make this happen? Because if so, this is just a mandate and it's hard to find some kind of moral failing with a change in development direction communicated clearly.

powerclue 20 hours ago [-]
How is this arrogant? Are open source developers now responsible for ensuring every fork works with the dependencies and changes they make?

This seems like a long window, given to ports to say, "we are making changes that may impact you, heads up." The options presented are, frankly, the two primary options "add the dependency or tell people you are no longer a current port".

bakugo 21 hours ago [-]
> I think you'll experience some pushback on the assertion that that particular quote has a lot of arrogance or disdain in it.

It's just a roundabout way of saying "anything that isn't running Rust isn't a REAL computer". Which is pretty clearly an arrogant statement, I don't see any other way of interpreting it.

simonask 21 hours ago [-]
Be real for a second. People are arguing against Rust because it supports fewer target architectures than GCC. Which of the target architectures do you believe is important enough that it should decide the future development of apt?
bakugo 16 hours ago [-]
I won't be real for a second, because this isn't about that.

Arguing that support for certain architectures should be removed because they see very little real world use is totally valid. But it's possible to do so in a respectful way, without displaying such utter contempt for anyone who might disagree.

jvanderbot 20 hours ago [-]
I read it as a straightforward way of saying "support for a few mostly unused architectures is all that is holding us back from adopting rust, and adopting rust is viewed as a good thing"
portly 21 hours ago [-]
Is it the borrow checker? Normally Rust has your back when it comes to memory oopsies. Maybe we need a borrow checker for empathy.
amarant 21 hours ago [-]
from the outside it looks like a defense mechanism from a group of developers who have been suffering crusades against them ever since a very prolific c developer decided rust would be a good fit for this rather successful project he created in his youth.
elteto 21 hours ago [-]
Maybe they are just really tired of having to deal with people who constantly object and throw every possible obstacle they can on the way.
bakugo 21 hours ago [-]
Maybe they wouldn't experience so much pushback if they were more humble, had more respect for established software and practices, and were more open to discussion.

You can't go around screaming "your code SUCKS and you need to rewrite it my way NOW" at everyone all the time and expect people to not react negatively.

anarki8 21 hours ago [-]
> You can't go around screaming "your code SUCKS and you need to rewrite it my way NOW"

It seems you are imagining things and hate people for the things you imagined.

In reality there are situations where, during technical discussions, some people stand up and with trembling voice start derailing them with "arguments" like "you are trying to convince everyone to switch over to the religion". https://youtu.be/WiPp9YEBV0Q?t=1529

simonask 21 hours ago [-]
That’s also not something anybody has actually said.
lkjdsklf 21 hours ago [-]
While no one has explicitly said that, it is the implied justification of rewriting so much stuff in rust
simonask 20 hours ago [-]
I disagree very strongly that a suggestion to change something is also a personal attack on the author of the original code. That’s not a professional or constructive attitude.
bakugo 21 hours ago [-]
Are you serious? It's basically impossible to discuss C/C++ anymore without someone bringing up Rust.

If you search for HN posts with C++ in the title from the last year, the top post is about how C++ sucks and Rust is better. The fourth result is a post titled "C++ is an absolute blast" and the comments contain 128 (one hundred and twenty eight) mentions of the word "Rust". It's ridiculous.

simonask 20 hours ago [-]
Lots of current and former C++ developers are excited about Rust, so it’s natural that it comes up in similar conversations. But bringing up Rust in any conversation still does not amount to a personal attack, and I would encourage some reflection here if that is your first reaction.
woodruffw 21 hours ago [-]
To be clear, the "you" and "my" in your sentence refer to the same person. Julian appears to be the APT maintainer, so there's no compulsion except what he applies to himself.

(Maybe you mean this in some general sense, but the actual situation at hand doesn't remotely resemble a hostile unaffiliated demand against a project.)

yjftsjthsd-h 20 hours ago [-]
> Julian appears to be the APT maintainer, so there's no compulsion except what he applies to himself.

To who is this addressed?

> If you maintain a port without a working Rust toolchain, please ensure it has one within the next 6 months, or sunset the port.

Because that sure reads as a compulsion to me.

lagniappe 21 hours ago [-]
The endless crusades are indeed tiresome.
bryanlarsen 21 hours ago [-]
Yes, the immediate and endless backlash we get whenever anybody says the word "Rust" is quite tiresome.
lagniappe 11 hours ago [-]
Ah yes, the signature snark from the Rust community. This is the type of thing that repels people.
donkeybeer 21 hours ago [-]
No, honestly Rust just has a really crappy attitude and culture. Even as a person who should naturally like Rust (and I do plan to learn it despite all this), I find these people really grating.
chillfox 21 hours ago [-]
[flagged]
simonask 21 hours ago [-]
And just like vegans, their detractors are far more vocal in reality.
OptionX 21 hours ago [-]
Untrue.
yoyohello13 20 hours ago [-]
As evidenced by this very comment chain. I've seen, by far, way more comments from people annoyed by vegans. I can't even remember the last time I've heard a vegan discuss it outside of just stating the food preference when we go out to eat.
OptionX 17 hours ago [-]
"If I don't see it, it doesn't happen" is the definition of anecdotal.
yoyohello13 16 hours ago [-]
Why is your anecdote more valid than mine?
kitsune1 21 hours ago [-]
> reality

In reality everyone eats meat because it's what the human body evolved to consume. There's nothing to detract.
tcfhgj 21 hours ago [-]
actually a vegan has to preach to some degree, otherwise it would be like a human rights advocate looking away when humans are tortured
powerclue 20 hours ago [-]
As a vegetarian on ethical grounds (mostly due to factory farming of meat) I politely disagree with your assessment.

I have to decline and explain in social settings all the time, because I will not eat meat served to me. But I do not need to preach when I observe others eating meat. I, like all humans, have a finite amount of time and energy. I'd rather spend that time focused on where I think it will do the greatest good. And that's rarely explaining why factory farming of meat is truly evil.

The best time is when someone asks, "why don't you eat meat?" Then you can have a conversation. Otherwise I've found it best to just quietly and politely decline, as more often than not one can be accommodated easily. (Very occasionally, though, someone feels it necessary to try and score imaginary points on you because they have some axe to grind against vegetarians and vegans. I've found it best to let them burn themselves out and move on. Life's too short to worry about them.)

Cthulhu_ 21 hours ago [-]
That's a bit of a jump. Veganism is a personal lifestyle / dietary choice. Objecting to livestock is activism. You can do either without the other.
tcfhgj 20 hours ago [-]
it's not just a dietary choice. It's a personal lifestyle in the sense of being your choice, but not in the sense of a lifestyle limited to your private space.

You think it's wrong to abuse animals. Why would you apply that only to yourself and think it would be ok for others to abuse them? You wouldn't.

yoyohello13 19 hours ago [-]
Frankly, I more often see meat eaters get defensive. We go to a restaurant, the vegan guy gets a meatless meal. The vegan guy gets bombarded with "Oh, you don't eat meat?" "Why?" "What's wrong with eating meat?" "I just like having a steak now and then."
leoh 20 hours ago [-]
I hate learning new things. It sucks. Also, I hate things that make my knowledge of C++ obsolete. I hate all the people that are getting good at rust and are threatening to take away my job. I hate that rust is a great leveler, making all my esoteric knowledge of C++ that I have been able to lord over others irrelevant. I hate that other people are allowed to do this to me and to do whatever they want, like making the decision to use rust in apt. It’s just sad and crazy to me. I can’t believe it. There are lots of people like me who are scared and angry and we should be able to control anyone else who makes us feel this way. Wow, I’m upset. I hope there is another negative post about rust I can upvote soon.
Havoc 19 hours ago [-]
Think tech space isn’t for you if you hate learning new things.
stodor89 19 hours ago [-]
It's sarcasm.
tucnak 19 hours ago [-]
Can you confirm these C++ fascists you speak of are in the room with you right now?
temptemptemp111 18 hours ago [-]
[dead]
malcolmgreaves 18 hours ago [-]
Nice /s
dathinab 20 hours ago [-]
Why is this still a discussion?

> was no room for a change in plan

yes, pretty much

at least the questions about it breaking unofficial distros, mostly related to some long-discontinued architectures, should never affect how a distro focused on current desktop and server usage develops.

if you have worries/problems beyond unsupported things breaking, then it should be obvious that you can discuss them; that is what the mailing list is for, and that is why you announce intent beforehand instead of putting things in the changelog

> complained that Klode's wording was unpleasant and that the approach was confrontational

it's mostly just very direct communication, which in a professional setting is preferable IMHO; I have seen too much time wasted on misunderstandings caused by people not saying things directly out of fear of offending someone

though he still could have done better

> also questioned the claim that Rust was necessary to achieve the stronger approach to unit testing that Klode mentioned:

given the focus on Sequoia in the mail, my interpretation was that this is less about writing unit tests and more about using some AFAIK very well tested dependencies. But even when it comes to writing code, in my experience the ease with which you can write tests hugely affects how much testing actually gets done, and rust makes it very easy and convenient to unit test everything all the time. That is if we speak about unit tests; other tests are still nice but not quite at the same level of convenience.
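As a sketch of that convenience: tests can live in the same file as the code and run with plain `cargo test`, no external framework needed (the function below is a made-up example, not apt code):

```rust
// Hypothetical helper: parse a "major.minor" version string.
fn parse_version(s: &str) -> Option<(u32, u32)> {
    let (major, minor) = s.split_once('.')?;
    Some((major.parse().ok()?, minor.parse().ok()?))
}

// Tests sit right next to the code; `cargo test` discovers and
// compiles this module only for test builds.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn parses_well_formed_versions() {
        assert_eq!(parse_version("1.2"), Some((1, 2)));
    }

    #[test]
    fn rejects_garbage() {
        assert_eq!(parse_version("not-a-version"), None);
    }
}
```

Because the test harness is built into the toolchain, the friction of adding one more test case is close to zero.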

> "currently has problems with rebuilding packages of types that systematically use static linking"

that seems like a _huge_ issue even outside of rust; no reliable Linux distro should have problems reliably rebuilding things after security fixes, no matter how they're linked

if I were to guess, this might be related to how the lower levels of dependency management on Linux are quite a mess, due to requirements from the 90s that are no longer relevant today, but which some people still obsess over.

To elaborate (sorry for the wall of text), you can _roughly_ fit all dependencies of an application (app) into 3 categories:

1. programs the system provides (optionally), called by the app (e.g. over IPC, or by spawning a subprocess), communicating over well defined non-language-specific protocols. E.g. most cmd-line tools, or your system's file picker/explorer, should be invoked like that (that they often aren't is a huge annoyance).

2. libraries the system needs to provide, called using a programming language ABI (Application Binary Interface, i.e. mostly the C ABI, which can have platform-dependent layout/encoding)

3. code reused to not rewrite everything all the time, e.g. hash maps, algorithms etc.

The messy part in Linux is that, for historic reasons, the latter two categories were not treated differently even though they have _very_ different properties w.r.t. the software life cycle. The last category exists for your code and your specific use case only! The supported versions usable with your program are often far more limited; breaking changes are far more normal; LTO is often desirable or even needed; other programs needing different, incompatible versions is the norm; even versions with security vulnerabilities can be fine _iff_ the vulnerabilities are on code paths not used by your application; etc. The fact that Linux has a long history of treating them the same is IMHO a huge fuck-up.

It made sense in the 90s. It hasn't for ~20 years.

It's just completely in conflict with how software development works in practice, and this has put a huge amount of strain on OSS maintainers, due to stuff like distros shipping incompatible versions, potentially by (even incorrectly) patching your code... and end users blaming you for it.

IMHO Linux should have a way to handle such application-specific dependencies in all cases, from scripting dependencies (e.g. python), over shared objects, to static linking (which doesn't need any special handling outside of the build tooling).

People have estimated the storage size difference of linking everything statically, and AFAIK it's irrelevant given the availability and pricing of storage on modern systems.

And the argument that you might want to use a patched version of a dependency "for security" reasons fails if we consider that this has led to security incidents more than once. Most software isn't developed to support this at all, and the resulting bugs can be subtle and bad to the point of an RCE.

And yes, there are special cases and gray areas in between these categories.

E.g. dependencies in the 3rd category that you want to be able to update independently, or dependencies from the 2nd which are often handled like the 3rd for various practical reasons, etc.

Anyway, coming back to the article: Rust can handle dynamic linking just fine, but only via the C ABI as of now. And while rust might get some form of Rust ABI to make dynamic linking better, it will _never_ handle it for arbitrary libraries, as that is neither desirable nor technically possible.
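A minimal sketch of what that C-ABI dynamic linking looks like from the Rust side (the function name is hypothetical): built with `crate-type = ["cdylib"]` in Cargo.toml, the resulting `.so` exports a plain C symbol that C programs, `dlopen`, or any other language can link against.

```rust
// `#[no_mangle]` keeps the symbol name as-is, and `extern "C"` pins
// the calling convention to the stable C ABI, so the function can be
// resolved and called like any other symbol in a shared object.
#[no_mangle]
pub extern "C" fn add_u32(a: u32, b: u32) -> u32 {
    a.wrapping_add(b)
}
```

Anything richer than C-compatible types (trait objects, generics, `String`) has no stable ABI, which is exactly the limitation described above.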

---

EDIT: Just for context, in the case of C you also have to rebuild everything that uses header-only libraries or pre-processor macros; not doing so is risky, as you would then mix different versions of the same software in one build. The same (somewhat) goes for C++ with anything using template libraries. The way to speed it up is by caching intermediate build artifacts, and that works for rust, too.

renewiltord 18 hours ago [-]
Overall, I think Rust is probably too dangerous to introduce into core software. Every time there is a donation to the Rust Foundation, the Rust community is in an uproar that it is not a large enough fraction of gross revenue. Linux, apt, are all currently both free as in speech and free as in beer. If we have to start donating to the Rust Foundation a percentage of gross revenue for every tool that we use written in Rust, it will cost a lot. Probably much better to just not put Rust in the kernel or in apt.

It's a Trojan horse language. There are no demands from C users that anyone donate to C non-profits. Much better, safer language to use from an ecosystem perspective.

steveklabnik 18 hours ago [-]
> Every time there is a donation to the Rust Foundation, the Rust community is in an uproar that it is not a large enough fraction of gross revenue.

Where are you seeing this happen? I'm curious because I never have, which means that I'm missing out on discussions somewhere.

renewiltord 18 hours ago [-]
Sure, it's on Reddit /r/rust. I'll provide links at the end, but it happens every time there is a donation.

> > > Multi trillion dollar conglomerate invests a minuscule fraction of a fraction of their monthly revenue into the nonprofit foundation that maintains the tool that will save them billions

> > 115 Billion / 365 days / 1440 minutes = ~ 220k

> > So they make around 220k per minute, 350k in under 2 minutes. Still, any amount is better than nothing at all.

> I genuinely hate the thought of "Better than nothing". We should be saying "go big or go home."

Pretty popular thread. Approximate 900 upvotes over those three comments.

> That's like a millisecond of Google revenue.

and separately

> I mean thats nice. But in all honesty, is 1 Million still a lot?

That's from a while ago so it's smaller. But you can tell the sentiment is rising because these expressions of "it's not enough" are becoming more popular. Just a matter of time before the community tries to strong-arm other orgs with boycotts and this and that. We've seen this before.

> > Cool, but also depressing how relatively small these “huge” investments in core technologies actually are.

> Yeah, seriously. This is comparatively like me leaving a penny in the “take a penny leave a penny” plate at the gas station.

and separately

> Ah yes, the confidence displayed by allocating 0.0004% of your yearly revenue.

> Satya alone earns that in a 40 hr work week.

It's a pretty old playbook to use free-software language to get one's technology entrenched, then there's murmurs about how not enough money is being sent back to the people making it, the organization then uses the community as the stalking horse to promote this theory, and then finally comes the Elastic License relicensing.

Elastic did it. MongoDB did it. Hashicorp did it. Redis did it. I get the idea, but we should pre-empt the trojan horse when we see it coming. I know I know. You can fork when it happens etc. but I'm not looking forward to switching my toolchain to "Patina" or whatever they call the fork.

And if you think I'm some guy with an axe to grind, I have receipts for Rustlang enthusiasm:

https://news.ycombinator.com/item?id=24127438

https://news.ycombinator.com/item?id=24328686

https://news.ycombinator.com/item?id=30756184

https://news.ycombinator.com/item?id=37573389

https://news.ycombinator.com/item?id=41639619

List of donation posts follows:

https://old.reddit.com/r/rust/comments/1noyqak/media_google_...

https://old.reddit.com/r/rust/comments/1ajm56w/google_donate...

https://old.reddit.com/r/rust/comments/1cnehqt/microsofts_1m...

steveklabnik 17 hours ago [-]
Thank you! I read the Rust subreddit less these days, I know I missed at least some of those threads. Maybe I should pay more attention again...

I agree those people are being pretty ridiculous.

> And if you think I'm some guy with an axe to grind

Nah, I was just like "hmm, I don't remember really seeing that, interesting." An actual honest question, no shade implied.

major505 18 hours ago [-]
It's funny because, thanks to efforts from companies like Valve, Linux finally seems to be receiving the recognition it deserves, and Rust evangelists and weirdos who claim moral superiority over the rest of us mortals are going to put everything at risk, because of this obsession with rewriting tools that have actually worked for many years without big issues, just so they can say their language is superior.

If there were some glaring problems with tools like sudo, sort, and apt, and you had a superior version, sure, go ahead. But this is clearly not the case. Sometimes the rust version is just the same, or even inferior, but people are ready to plunge into destruction just to say their distro has the latest and greatest. It's just vanity.

Maybe the conspiracy theories that big tech finances crazy, incompetent people into positions of power in open source projects it can no longer compete with, in order to destroy them from the inside, are not that far-fetched.

ekjhgkejhgk 19 hours ago [-]
This thing gets everywhere.
tyfon 19 hours ago [-]
I have a dual pentium pro 200 that runs gentoo and openbsd, but rust doesn't ship i586 binaries, only i686+. So I would need to compile on a separate computer to use any software that is using rust.

There is already an initrd package tool I can't use since it is rust based, but I don't use initrd on that machine so it is not a problem so far.

The computer runs modern linux just fine, I just wish the rust team would at least release an "i386" bootstrap binary that actually works on all i386-class CPUs like all of the other compilers do.

"We don't care about retro computers" is not a good argument imho, especially when there is an easy fix. It was the same when the Xorg project patched out support for RAMDAC and obsoleted a bunch of drivers instead of fixing it easily. I had to fix the S3 driver myself to be able to use my S3 trio 64v+ with a new Xorg server.

/rant off

cogman10 19 hours ago [-]
This sounds like it's fun. However, I have to ask, why should the linux world cater to supporting 30 year old systems? Just because it scratches an itch?

You can grab a $150 NUC which will run circles around this dual Pentium Pro system while also using a fraction of the power.

You obviously have to do a lot of extra work, including having a second system, just to keep this old system running. More work than it'd take to migrate to a new CPU.

[1] https://www.amazon.com/KAMRUI-AK1PLUS-Processor-Computer-Eth...

grayhatter 19 hours ago [-]
> You can grab a $150 NUC

I grew up without money, it makes me laugh when I read comments like this. You can just, yeah when you're fortunate enough to have a strong support system; you can.

My understanding is that the systems are not meaningfully common, and are hobbyist archs. But the idea that dropping support is fine because you can just throw money at it is so incredibly divorced from reality that I actually feel bad for anyone that believes this.

I deeply believe that if you don't like what a maintainer of FOSS code has done, you should fork the project. Admittedly that's a very onerous suggestion. But more important than that, you should help people when you can. If you're deciding to drop support for a bunch of people because it makes your job easier or simpler, when you don't need to, you're the bad guy in the story. That's the way this announcement has been written, and most reasonable people object to that kind of behavior. Selfishness should feel a bit offensive to everyone.

cogman10 18 hours ago [-]
Your post is offensive to me.

I have plenty of relatives without money or resources and $150 is something they can all afford.

It's not even the floor of the amount of money needed (Here's a used NUC for $30 [1]), but rather just showing that a new system can be had for a lot less than many people expect.

You are the one divorced from reality if you think there's an army of poor orphans running modern linux on pentium pros.

Affording rent and health insurance is a FAR bigger issue than being able to throw a little money towards a new computer once every 10 years.

[1] https://www.ebay.com/itm/366000004972?_skw=NUC&itmmeta=01KAY...

pornel 10 hours ago [-]
The situation today is very different than what it used to be when people actually used 386 or Amigas because they had no other options (BTW, Rust supports m68k, just not AmigaOS specifically).

Today even crappiest old PCs that you can fish out of a dumpster are already new enough to have Rust/LLVM support. We have mountains of Rust-compatible e-waste that you can save from landfill. Take whatever is cheapest on eBay, or given away on your local FB marketplace, and it will run Rust, and almost certainly be orders of magnitude faster and more practical than the unsupported retro hardware.

Using actual too-niche-for-Rust hardware today is more expensive. Such machines are often collectors' items, and need components and accessories that are hard to obtain, or need replacements/adapters that can be custom low-volume products.

Even if you can put together something from old-but-not-museum-yet parts, it's not going to make more sense economically than getting an older-gen Raspberry PI kit or its Ali Express knock-offs (there are VGA dongles more expensive than some of these boards).

It's fine to appreciate SGI and DEC Alpha, have fun using BE OS, or prove that AmigaOS is still a perfectly fine daily driver, but let's not pretend it's a situation that people are in due to economic hardship.

grayhatter 10 hours ago [-]
> but let's not pretend it's a situation that people are in due to economic hardship.

I'd encourage you to not strawman my response. Because I already said myself that it appears to me it's only hobbyists who are losing support.

My objection isn't to dropping support per se; my objection is that support is being dropped without cause, other than the assumption that it would be more comfortable for the maintainer.

Maintainers are absolutely not required to support everything forever, but I recall a story where someone from Linux paid for a user to upgrade, not because that was required, but because it would make dropping support for that floppy driver feel ethical.

This is the level of compassion everyone should expect from software engineers in critical positions of power.

I have no sympathy for people who lack the compassion to expend the effort to help others. I do have sympathy for people who have to watch their world get worse, even if it's theirs alone, so that others can avoid a trivial amount of perceived discomfort.

Should this solo maintainer (who understands C) be required to do things exactly the way that I want? Of course not, but I'll be damned if everyone expects me to remain silent while I watch them disrespect other people who were previously depending on their support.

tyfon 19 hours ago [-]
The system is actually running fine standalone since I have been able to avoid rust software.

As to why it should cater to it: it's more that there is no need to remove something that already works just for the sake of removing it.

It is possible to compile rustc on another system so it supports i586 and below. Just a small change in the command line options. And it doesn't degrade the newer systems.

I have plenty of faster machines, I just enjoy not throwing things away or making odd systems work. It's called having fun :)

cogman10 19 hours ago [-]
> it's more that there is no need to remove something that already works just to remove it.

There actually is. Support for old systems isn't free. Mistakes in the past are hard to fix and verify on these old systems, particularly because there aren't a whole lot of devs with access to dual Pentium Pro systems who can verify changes affecting them.

That means that if there's a break in the kernel or elsewhere that ultimately impacts such a system, they'll hear about it from a random retro computing enthusiast, and it takes time from everyone to resolve the issue and review patches to fix the retro computer.

Time is precious for open source software. It's in limited supply.

I get doing this for fun or the hell of it. But you do need to understand there are costs involved.

ShroudedNight 19 hours ago [-]
I thought the Pentium Pro _was_ a 686?

Wikipedia seems to correlate: https://en.wikipedia.org/wiki/Pentium_Pro, as do discussions on CMOV: https://stackoverflow.com/a/4429563

tyfon 19 hours ago [-]
Yes, sorry, I remembered incorrectly. The rust compiler claims to target i686, and the CPU is i686 too, but the rust compiler uses instructions only introduced with the Pentium 4, so it doesn't actually work on all i686 chips.
ShroudedNight 19 hours ago [-]
Yeah, that sucks. I assume this is SSE2?
ShroudedNight 19 hours ago [-]
It does look like there are legitimate issues with x87 floating-point: https://github.com/rust-lang/rust/issues/114479
gpm 18 hours ago [-]
Related from a couple of days ago: A time-traveling door bug in Half Life 2

https://mastodon.gamedev.place/@TomF/115589875974658415

https://news.ycombinator.com/item?id=46009962

tyfon 19 hours ago [-]
That is correct :)

Edit: I see from the sister post that it is actually llvm and not rust, so I'm half barking up the wrong tree. But somehow this is not an issue with gcc and friends.

gpm 18 hours ago [-]
> "We don't care about retro computers" is not a good argument imho,

It absolutely is. If you want to do the work to support <open source software> for <purpose> you're welcome to do so, but you aren't entitled to have other people do so. There's some narrow exceptions like accessibility support, but retro computing ain't that.

ondra 19 hours ago [-]
Pentium Pro is the first i686 CPU, so you should be fine.
dontlaugh 19 hours ago [-]
Surely retro hardware is fine with retro software.
ForHackernews 19 hours ago [-]
I mean... Pentium Pro is 30 years old at this point. I don't think it's unreasonable that modern software isn't targeting those machines.
yjftsjthsd-h 17 hours ago [-]
So anyways, here's the netbsd docs for running the latest release on VAX: https://wiki.netbsd.org/ports/vax/
ForHackernews 15 hours ago [-]
Obscure retro OS runs on obscure retro hardware you say?
yjftsjthsd-h 15 hours ago [-]
NetBSD isn't a retro OS, nor is it particularly obscure. (For that matter, VAX isn't obscure, though it's very retro.)
ForHackernews 14 hours ago [-]
Maybe not for hacker news. In the real world, it's plenty obscure. Poll your family at Thanksgiving.

https://w3techs.com/technologies/details/os-netbsd

> NetBSD is used by less than 0.1% of all the websites whose operating system we know.

yjftsjthsd-h 9 hours ago [-]
I would expect the same number of laypeople to know about NetBSD and Debian (zero). Which gets neatly to an argument that I like for this kind of thing: Don't be so quick to throw out the long tail, because you're on it.