I feel this gets to the core of why I like Arch so much. I’m a Linux novice, so for a long time I ran Ubuntu VMs when I needed to do stuff on Linux (this being before WSL). It worked well enough, but I never really felt that I properly knew what I was doing.
Then I tried installing Arch in a VM… and it took me several days and several attempts, but when I finally got it working I felt, for the first time ever, like I actually understood the system I was using. Now I have a webserver running Arch, and only a week or two ago installed Arch on an old PC to see if I could get a desktop working.
Of course, Arch is not easy, especially for a non-expert such as myself. Sometimes I have no idea how to solve a problem, or even what kind of software I need in the first place. For this reason, I’m planning to install Debian instead on the new laptop I’ve ordered (to replace my ~10 year old machine running Windows), in the hopes that it might have more stuff working out of the box. Still, I’d say that trying out Arch has immeasurably improved my knowledge, not just of Linux but of the underlying concepts behind modern computing.
(Oh, and the documentation’s amazing too!)
I love hearing that, because it was a goal of Arch from the very beginning: to stop fearing the commandline.
And I was the first alpha tester, in that I wanted to learn more about how the sausage was actually made, so to speak. I was comfortable using things like Linuxconf at the time, but its beginner-friendly veneer meant that I didn't really know what to do if it _wasn't_ there.
After tinkering with Crux and PLD for a bit, I wanted to go deeper and start from nothing. So I loaded up the LFS docs and just started typing in the shell stanzas to start building my compilation toolchain. In an effort to DRY as much as possible, the work also got placed into shell scripts, which eventually became PKGBUILD modules.
I started having way too much fun with it, so I put up the world's ugliest webpage to share my triumphs, and a couple people found it, somehow. That begat the immediate need for documentation, which eventually brought Arch into the forefront. I can't recall who spearheaded the Arch wiki, but we owe them a great debt, because it has become a valuable resource for Linux users, and not only the Arch users.
Arch is my happiest accident.
ps: btw, I run Arch (is this still a meme?)
Started a CS degree the following year, and I decided I wanted to take Linux more seriously, so I wiped Windows off my laptop and threw Arch on it to force myself to learn, and it's been my daily driver now for the last decade!
I built a tool that does this, you can look through the code and see how I do it--it's just bash spaghetti. Download a Debian live ISO (or use my tool to create an Arch-like minimal live USB) and you can install it however you want.
As a novice, if I am stuck somewhere, the odds that I find the answer in the Arch Wiki, or can ask an Arch enthusiast and get an answer, are orders of magnitude higher than with the equivalent sources for Debian.
*  https://wiki.debian.org/DebianInstaller/Preseed
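For context, preseeding means answering the Debian installer's questions from a file instead of interactively. A tiny sketch; the keys are real d-i questions, but the values here are only examples, not recommendations:

```
# Hypothetical preseed.cfg fragment: pre-answer installer prompts.
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/xkb-keymap select us
d-i mirror/http/hostname string deb.debian.org
d-i mirror/http/directory string /debian
# Fully automatic partitioning of the whole disk (destructive!).
d-i partman-auto/method string regular
d-i partman/confirm boolean true
```

Point the installer at a file like this (via kernel command line or a baked-in ISO) and it runs mostly unattended.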
Honest question: why days?
I have installed Arch multiple times in the past decade and I don’t remember anything exceptionally out of the ordinary. You just follow the step-by-step instructions and you are good to go.
It’s all fairly standard: boot a live CD, get internet, format the disk, mount it; Arch provides a script to install the base and another to change the root, and it’s vanilla Linux config from there.
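Those steps, roughly, as commands. The device name and package list are illustrative only; the wiki's installation guide is the real reference:

```shell
# From the live ISO, after networking is up:
fdisk /dev/sda                               # partition the disk (interactive)
mkfs.ext4 /dev/sda1                          # format the root partition
mount /dev/sda1 /mnt                         # mount it
pacstrap /mnt base linux linux-firmware      # install the base system
genfstab -U /mnt >> /mnt/etc/fstab           # generate the fstab
arch-chroot /mnt                             # change root; vanilla config from here
```

Inside the chroot you set the timezone, locale, hostname, root password, and a bootloader by hand, which is exactly the part that teaches (or overwhelms) a novice.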
Edit: Hmm, I guess how to configure a vanilla Linux might be quite complex for someone who has no idea of how to do that. I might have answered my own question actually.
Because I was a total novice. I barely even knew what a partition was, let alone anything else. Now, of course, I find it much simpler.
I think there’s a lot of decisions like that to make in Arch
The same thing happened to me the first few times I installed Gentoo. Could I have migrated the first install to what I wanted and had at the end? Sure, if I'd had the knowledge I didn't have at the time.
You don’t really understand anything more except how to configure a system with a poorly designed configuration system. Installing a difficult-to-use Linux distribution teaches you nothing about operating systems, compilers, linkers & loaders, shared libraries, or anything else about the foundations of modern computing.
Besides, it’s not like this knowledge is useless. I now find myself being able to diagnose and fix problems with my system which previously I was clueless about. And it makes it a lot easier for me to learn the lower-level details if I so choose.
I love Arch for its simplicity and performance.
But it just wasn't productive for me for getting everyday tasks done. I'm not an advanced Linux user, and occasionally I'd need hours to get seemingly simple stuff done.
For a hobby desktop, fine. For a work tool as a developer, I moved back to Ubuntu (though I have moments of regret every day).
You either have automation to make things easy to the user, but then it's no longer simple.
Or you require the user to do everything manually, but then it's no longer easy.
Perhaps with a focus on the audience (e.g. a web dev distro), it becomes viable to make sensible compromises on both (easiness & simplicity) that result in a good combination for the user.
Pop!_OS is coming from the other direction, but I found I prefer using i3/sway instead, as I had trouble configuring certain things about Pop.
The only reason I've seen to use anything else is if doing a full desktop installation doesn't make sense, as the installer is largely geared towards installing most everything, or perhaps leaving out a category or two. Beyond that, the documentation is clear, and it's made as easy to use as makes sense without just hiding things behind a black box (where, when things go wrong, they REALLY go wrong).
Nice to see some of these newer distros like Arch and Ubuntu are coming along though; choice is good.
One alternate approach I find interesting is that of NixOS: everything is declaratively specified in a single place, so it’s easy to configure stuff, but also easy to see how everything works. I actually tried out NixOS before installing Arch on the aforementioned desktop box, and if it wasn’t for the sheer opaqueness of Nixpkgs I’d still be using it. On the other hand, I can hardly call Nix ‘simple’… as with everything, it’s all tradeoffs, I suppose.
I'm going to try again with a different Ubuntu release as soon as I can find the patience. Prior lessons have taught me to vandalise the automatic updater, or it'll brick itself in the future.
Ease of use has not been the defining characteristic of Ubuntu.
With Arch, it sometimes breaks (not that often), but it's always quite clear what went wrong (pacman errors are limpid, logs are clear). You'll find the fix in the Arch Linux News, the wiki, or the forums. Bonus: you probably know your system better, and as a side effect you'll be more efficient at fixing it.
We've seen many failures during upgrades, from both novice and advanced Linux users; meanwhile one of our sysadmins accidentally upgraded Debian by two releases at once (from Debian 9 to 11) and it still "just worked"...
I run pure Debian stable everywhere I can in my personal life (laptops, desktops, servers). It's predictable and there's not a lot of planning at update OR upgrade time. Backup, push the button, reboot, done.
Just kidding, backups happen daily automatically, so it's just update and reboot.
You're describing EndeavourOS. It's Arch, but with an installer that gets you a sane default desktop quickly.
It's what I use now that I'm too old to waste all that time it takes to get Arch running properly
IMO it combines the best of both worlds.
Maybe have a look at Manjaro; it's based on Arch but comes prepackaged, so you can just boot up the live image and install, if everything works.
Big ones are: shadiness with funding, letting their SSL certs expire 4 times, and the fact that their idea of stable isn't additional testing, but just letting the packages sit for a week.
There was also a recent kerfuffle not covered there where they shipped a broken kernel to Apple Silicon users without contacting the Asahi devs: https://twitter.com/AsahiLinux/status/1576356115746459648
The recommendation from the repo, EndeavourOS sounds interesting, though.
The link for the post is dead, but they've let their SSL cert expire multiple times. While it happened a few years ago, I find that a hard thing to come back from.
edit: Beginner's guide, not install guide, is what was deleted
First, it's as vanilla as possible, which means that packages are modified as little as possible from upstream. This means you don't learn anything distro-specific by mistake, and you actually learn more about how the package is intended to function.
The second is the great documentation and community. The Arch wiki is full of common tweaks that you'll likely have to do; many other distros may have just held your hand and assumed you wanted those boxes checked, but Arch makes you check them.
Being minimal also helps: it really doesn't overwhelm you. Usually only one thing at a time is broken or missing, so you're only ever dealing with one thing at a time.
After using and contributing to Gentoo for a few years, I don't think I could confidently explain how all of the pieces of the desktop graphics and audio stack fit together - I just installed it and it worked.
I recently installed Archlinux32 on an old Pentium II machine just for the fun of it and was pleasantly surprised that it still feels reasonably responsive (I didn't get X11 to work yet though as the GPU driver for that machine apparently never reached mainline or was removed in the meantime).
Everything is managed by systemd/networkd the way its authors intended. No custom scripts or other cruft or bloat. No 'helpful' background services to update man pages or the package database.
It's also refreshing how fast pacman is compared to apt.
Alpine is. If you think Arch is fast, try Alpine x86.
On X.Org, VESA works everywhere.
Also are you suggesting that Debian netinstall is not minimal?
To get a fully-working system, you'd also need a bootloader, but Arch doesn't prescribe what that has to be. Alpine is mostly going to give you these same things in terms of available CLI utilities, but rather than being based on GNU libc and GNU coreutils, it's based on musl and busybox. The init system is OpenRC rather than systemd. And it has a default bootloader, which is syslinux.
This makes Alpine more "minimal" in the sense of a minimal installation taking up less disk space, because musl, busybox, and OpenRC are smaller in the literal sense: the binary files consume less disk space than glibc, GNU coreutils, and systemd. Busybox also comes with ash (I think actually dash) as the default shell, which is smaller than bash.
I have no idea if the apk package manager is smaller than pacman. They're both smaller than what you'd get out of a Debian or Red Hat descended system.
Personally, I think it's a bit misleading to call either of these more minimal than the other. The functionality, feature set, and list of available utilities are pretty much the same. Alpine is just giving you smaller files, though note that using a musl-based system presents a lot of difficulty, because a fair amount of software Linux users expect and are familiar with isn't really POSIX-compliant and only really works with the GNU userland.
> Alpine Linux is designed to run from RAM
in the wiki almost makes it sound like OpenWrt's opkg, where the root fs is read-only.
There are many custom scripts to wrap around systemd so you can still use old style commands. Apt is pretty slow (and again packages come with custom scripts) and stuff like apt-xapian-index will just gobble up all your CPU if you are on a slow system.
Debian has many scripts to automate things for a nice experience, but it's certainly not minimal.
Most defaults are sensible and close to what the app itself provides, again with exceptions to play nicely with the rest of the system.
Most packages also come with a bunch of recommended ones that extend functionality, which means a bit of extra space used.
And the most important thing is that upgrades work. I installed my desktop in ~2008 and just upgraded across the ages; the install is older than every single component in my machine.
> There are many custom scripts to wrap around systemd so you can still use old style commands.
That's just not breaking old stuff, minimal doesn't really need to mean "just breaks your old scripts that worked fine up until now".
And it's kinda required for a transitional period: some packages still use /etc/init.d/* to start, for example, and AFAIK Debian still hasn't said "systemd is the only way forward", which means many packages provide both /etc/init.d/* for SysV boot and /lib/systemd/system/* for systemd boot.
> and stuff like apt-xapian-index will just gobble up all your CPU if you are on a slow system
How is a tool that's not even in the standard install relevant to anything?
I didn't know Alpine was useful as a working machine though.
This is true, also while Alpine is an excellent base image, some people have run into troubles with musl/busybox and prefer to use Debian/Ubuntu or whatever else they're familiar with as their container base.
Then again, I kind of went in the opposite direction and use Ubuntu as the base for all of my container images and install software "the normal way": for example, getting OpenJDK through apt as I would on a server with Ansible, or for my local dev machine, without any of the fanciful optimizations or clever hacks to keep the file sizes down.
The downside of this is that my base images are multiple hundreds of MB in size (even after cleaning apt cache in the same step as doing the install, to avoid adding that to the layers), but on the bright side that hardly matters because I use the same base images for all of my containers so only the changes for that particular image need to be transferred through the network and like 40-80% of the layers remain consistent: https://blog.kronis.dev/articles/using-ubuntu-as-the-base-fo...
It's not "optimal" from a size perspective, but it's delightfully simple and approachable.
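The "install and clean the apt cache in one step" trick reads like this in Dockerfile form; the base image and package here are just examples:

```dockerfile
FROM ubuntu:22.04
# Install and clean in ONE layer, so the apt cache never gets baked
# into the image; splitting these into separate RUNs would keep the
# cache in an earlier layer even if a later step deleted it.
RUN apt-get update && \
    apt-get install -y --no-install-recommends openjdk-17-jre-headless && \
    rm -rf /var/lib/apt/lists/*
```

Since every app image starts FROM the same base, registries and hosts only ever transfer the differing top layers.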
My main point was as a response to the original question, which was: why aren’t more docs in wiki form? I think one reason for that is that good docs require dedicated contributors, and I think a wiki does little to nothing to reduce their burden.
The myth of the wiki is that by erecting it, drive-by contributors will build a great product. I don’t think any quality wiki was built that way.
... till they had a failure and discovered none of their backups was working.
Check your backups kids.
I use Debian Stable and Fedora on my systems and I have the latest versions of all of the software of which I want the latest versions because I can install software from source like a big boy. And my installers don't have version numbers hard-coded in them like PKGBUILDs do, so I get the latest versions immediately.
Recently made the jump to NixOS though and been really happy with the additional features it offers.
1) Keep both linux and linux-lts installed since otherwise you have no backup if the one you normally use doesn't work.
2) Always fully update the system since dependencies aren't always fully specified and a partial update can damage the rest of the system. If you need to hold back a package, add it to the IgnorePkg= line in /etc/pacman.conf until it works again.
3) Avoid AUR except for rare cases where you review the package manually (always avoid AUR helpers).
4) Don't be too lazy even though things mostly just work, check your boot logs at least every year or so to improve the chance of fixing issues before they cause trouble and look for and deal with pacnew files at least a few times a year.
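Points 2 and 4 translate into something like this; a sketch, assuming a systemd system with pacman-contrib installed, and with an illustrative package name:

```shell
#!/bin/sh
# Point 2: to hold back a broken package, /etc/pacman.conf gets a line like
#   IgnorePkg = linux
# (remove it again once the package is fixed).

# Point 4a: scan the current boot's log for errors (journalctl is systemd's).
if command -v journalctl >/dev/null 2>&1; then
  journalctl -b --priority=err --no-pager
fi

# Point 4b: look for unmerged config updates that pacman left behind.
pacnew_files=$(find /etc \( -name '*.pacnew' -o -name '*.pacsave' \) 2>/dev/null)
if [ -n "$pacnew_files" ]; then
  printf 'Unmerged config files:\n%s\n' "$pacnew_files"
else
  echo 'No pacnew/pacsave files found.'
fi
# pacman-contrib's pacdiff can walk through and merge these interactively.
```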
I upgrade them when I get round to touching them for some reason so sometimes months will go by. I've never encountered a problem after upgrade that I couldn't quickly resolve with a brief bit of tinkering, and I'd take that over starting from scratch or leaving things mouldering away on outdated software any day.
This, and refined tools such as the AUR that add massive quality-of-life improvements to the overall Linux user experience, is what keeps me happy with Arch.
This distro leaves all others in the dust in terms of speed and software availability; I highly recommend it to everyone looking for a no-nonsense and up-to-date system.
But what if I need packages created and maintained by vetted, qualified devs rather than the unvetted randos that upload PKGBUILDs to the AUR? Many of the AUR contributors I've looked into have no publicly-accessible real names, no personal websites, no LinkedIn accounts, and their GitHub accounts are only a couple years old with Japanese cartoon characters as their account photos.
Pay for them or package them yourself. The nerve of being angry at people giving you their work for free and having the *audacity* of thinking you should have access to their real name, personal websites, LinkedIn and GitHub account.
The level of entitlement dripping from your comment is disgusting.
You reading entitlement there and responding with such hostility is on you.
And about project maintainers running PPAs - Say you're a dev, and you want to package binaries for your software. What's easier, whipping up a quick PKGBUILD once and putting it in your git server, thus allowing anyone to get the latest updated build at anytime, or setting up accounts and painstakingly compiling and updating each build to a PPA? Are you aware of the hundreds of abandoned PPAs that lie orphaned after maintainers gave up in frustration about how cumbersome they are?
Mainstream Linux distros feel a lot more like Windows these days. Sure, they require less configuration, but they're also much harder to mess around with. Starting up htop reveals a jungle of daemons and weird systemd stuff I don't even know what it does. Systemd is a terribly documented nightmare to configure, etc.
It's so nice in Arch to pretty much know what everything is for, because I was the one who installed it. And to have documentation that isn't infuriating to navigate.
During that time, I often wondered whether I should "play"/experiment more with other distros; after all, I loved tinkering with my vim config and network setups etc.
However, I've been just satisfied with the status quo, and more importantly: I just wanted to get shit done.
Apt, dpkg, systemd. If I want to get bleeding-edge SW I'll build the upstream source manually. No big deal - won't happen too often.
Getting older, I'm beginning to despise fixing the OS more and more ... I just want the machine to work. This perhaps results from my day job, which involves OpenBSD development/tweaking ... and, in general, a lot of cursing.
Granted: I'm not a gamer or graphics-enthusiast, and use my computer primarily for development, writing, watching movies/pictures ... Your typical senior resident trapped in the body of a 30ish guy.
I'm often wondering whether I'm just lazy and/or whether my attitude is the norm or rather the exception respective to Unix/Linux (power)users.
Edit: forgot to say a big "thank you" to the Arch community! Over the years I consulted the Arch wiki endless times! Almost every time it was really helpful (in contrast to the Debian wiki, lol).
That being said, I do run Arch on my laptop and desktop these days; I like being a little closer to upstream. I don't run a ton of bleeding edge software, but using the AUR makes it incredibly easy to stay up to date. I am also extremely appreciative of the Arch wiki; no matter what distro I'm using, it's one of the first places I check if I'm having an issue.
My Linux experience was pretty minimal. Some trying out on desktop in the early 2000s and later again after Ubuntu became a thing, but I always got weird errors. Then some in university, and again a bit to administer my VPS or rPi.
Arch was a breath of fresh air, not only could I get current packages, everything was so well documented! The wiki is, as the article rightfully says, amazing. Now, even when I’m not using arch (I have a small Proxmox server with Debian and Debian containers), I still use the archwiki as I know it will help me for everything but Debian specific things. My first arch install (before that, I never installed an OS without an installer) took maybe 2 hours.
The thing that makes the Arch docs so great is that it covers edge cases and has lots of examples.
For example, right now the latest gdb is broken on both my machines, and since I'm not as keen to participate in troubleshooting new software, I think I'll be moving to a more stable distro pretty soon.
Oh that distro is great. Is there some community package repo in that, equivalent to Arch's AUR? None? NVM.
It has been working pretty well for me, except for a couple of issues that I ran over these years (e.g. the transition to systemd in 2012/2013, or no pacman -Syu for several weeks).
It could get a bit tricky if you update very rarely, like once a year, but not sure what else there is to do to break things. Please elaborate a bit more.
The Arch Linux repos hold only a single latest version of each package. There are some community-maintained repos that contain older versions (and those can be easily fetched through the downgrade utility).
When updating Arch Linux, upgrade everything and then reboot. That way it should not break.
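That is, always run the full transaction, never a partial one (the package name below is just an example):

```shell
pacman -Syu              # sync databases AND upgrade everything together
# pacman -Sy somepackage # DON'T: refreshes databases but installs only one
#                        # new package, which may then depend on newer
#                        # libraries the rest of the system doesn't have yet
#                        # (the classic "partial upgrade" breakage)
reboot                   # so the running kernel and services match the disk
```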
If you want to install something manually on Arch, it's better to create a package first (if it's not already in the AUR). That way pacman can check for file conflicts and corruption before installing anything.
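For anyone who hasn't seen one, a PKGBUILD is a short shell file that `makepkg` turns into a proper pacman package. A bare-bones sketch, in which every name, URL, and value is a placeholder:

```shell
# PKGBUILD -- build and install with `makepkg -si`; everything here is hypothetical.
pkgname=hello-example
pkgver=1.0
pkgrel=1
pkgdesc="Example package"
arch=('x86_64')
license=('MIT')
source=("https://example.com/$pkgname-$pkgver.tar.gz")
sha256sums=('SKIP')   # use a real checksum for anything you distribute

build() {
  cd "$pkgname-$pkgver"
  make
}

package() {
  # Install into $pkgdir; makepkg then archives this tree as the package.
  cd "$pkgname-$pkgver"
  make DESTDIR="$pkgdir" install
}
```

Because the result goes through pacman, it can later be cleanly upgraded or removed, unlike a bare `make install` into /usr.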
With NixOS, the whole system configuration is declared starting from a single configuration file.
So, NixOS is great for addressing "I forgot how I set <whatever> up".
You'll likely end up using more disk space with NixOS if you're changing your system, since NixOS has functionality which makes it easy to rollback the system to earlier configurations.
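For anyone unfamiliar, a NixOS system is described by something like this single file; a toy /etc/nixos/configuration.nix sketch, where the option names are real but the selections are only examples:

```nix
{ config, pkgs, ... }:
{
  networking.hostName = "mybox";
  time.timeZone = "Europe/Berlin";
  services.openssh.enable = true;
  environment.systemPackages = with pkgs; [ git vim ];
  users.users.alice = {
    isNormalUser = true;
    extraGroups = [ "wheel" ];  # sudo access
  };
}
```

Applying it with `nixos-rebuild switch` records a new "generation" you can boot back into, which is both the rollback feature and the source of the extra disk usage.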
I think especially the metrics for "simple" are very use(r) dependent. Simple for you or me means something entirely different than e.g. my Grandma (who has no computer). Personally I run four Linux machines, so they're customized anyway and "simple" for me means "I understand everything that's installed, because I installed and configured it". Someone running 4000 machines would probably have a slightly different opinion on what's "simple".
Small, well, it's not an embedded Linux with a small libc and -Os for sure. But I never felt Arch bloated.
As for "small", well, it varies, but Arch systems tend to bloat more with time as the system does not provide much for auditing and cleaning up your system, so the older an arch install gets, the more garbage it accumulates. Arch also tends to turn on as many options as possible for each of the things it packages, so many packages have a lot of optional dependencies made mandatory. This is not a unique problem to Arch; only Gentoo (and maybe Nix and friends) solve this one, and they have many other problems to contend with.
All of this is not to say that Arch is necessarily a poor choice. It's just not simple, nor small.
Regarding the "growing" OS, hm... I've manually checked what's installed on my Debians using aptitude, and removed old stuff. Similarly, I could let pacman produce a list of installed packages to audit and check which are not required anymore. Either case needs manual action, because the automatic tool will not know if I still need that random python lib I manually installed, or if it can be removed (truth be told: neither do I!). Now for unused dependencies this is different.
Pulling in lots of stuff is a problem, yes. I think this could be solved by building those packages with a modified PKGBUILD locally (making it more like Gentoo), but for my "Linux on a big machine" I never saw the need to try that.
Anyway, I didn't read it like you were claiming Arch to be a poor choice :)
Thanks for the reply, I really appreciate the perspective. I don't want to drag you into a discussion over details, especially since I don't feel like "you're wrong". So feel free to just let it stand like this. OTOH, what are your favorite glibc/coreutils/systemd/PAM replacements?
There may be simpler packaging solutions, like Slackware's or whatever postmarketOS uses, but those are limited, and you'll have trouble figuring out how to do things like selecting files you don't want the package manager to ever overwrite, or preventing installation/updates of specific packages, or other things you might need that go slightly beyond basic installation/updates.
Compared to BSD's, mediocre and subject to dramatic changes by nature.
This year I have done short reviews of FreeBSD, OpenBSD and NetBSD. IMHO the docs for Arch are more helpful than any of theirs.
You could say *BSD documentation is more "comprehensive" but I usually find myself wading through an ocean to find what I'm looking for. On top of that, every time you try to look elsewhere, you just see RTFM. I can respect the sentiment, but having different variations of the same documentation can help understanding imo
`help` # in the terminal.
I think Arch generally got its philosophy right. It’s pretty much the minimal set of tools to get an easy to update binary distribution. They don’t touch what they don’t have to.