Think of how much faster their servers would be with one of those consumer Epyc CPUs.
I was about to ask people to donate, but they have $80k in their coffers. I realize their budget is only $17,000 a year, but I am curious why they haven't spent $2-3k on one of those Zen4 or Zen5 mATX consumer Epyc servers, as they come in at around or under $2k. If they have a fleet of these old servers, I imagine a Zen5 one could replace at least a few of them and consume far less power and space.
> This means their servers are very old ones that do not support x86-64-v2. Intel Core 2 Duo days?
This is not always a given. In our virtualization platform, we recently upgraded a vendor-supplied VM, and while it booted, some of the services on it failed to start despite exposing an x86-64-v2 + AES CPU to the VM. The minimum requirements cited "Pentium and Celeron", so that was more than enough.
It turned out that one of the services used a single instruction added in a v3 or v4 CPU, and failed to start. We changed the exposed CPU type and things returned to normal.
So, their servers might be capable and misconfigured, or the binary might require more than what it states, or something else.
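If you want to check what the exposed CPU actually advertises, reading /proc/cpuinfo is enough. A minimal sketch in C (flag spellings like "ssse3" and "sse4_1" are the kernel's):

    #include <stdio.h>
    #include <string.h>

    /* Report whether the CPU the kernel (or hypervisor) exposes advertises
     * the flags in question. Only the first "flags" line is checked, which
     * is enough for a quick sanity test on a VM. */
    int main(void) {
        char line[8192];
        FILE *f = fopen("/proc/cpuinfo", "r");
        if (!f) { perror("/proc/cpuinfo"); return 1; }
        while (fgets(line, sizeof line, f)) {
            if (strncmp(line, "flags", 5) == 0) {
                /* loose substring match; fine here because neither name
                 * is a substring of any other flag */
                printf("ssse3:  %s\n", strstr(line, "ssse3")  ? "yes" : "no");
                printf("sse4_1: %s\n", strstr(line, "sse4_1") ? "yes" : "no");
                break;
            }
        }
        fclose(f);
        return 0;
    }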
lucb1e 167 days ago [-]
A developer on the ticket writes: "Our machines run older server grade CPUs, that indeed do not support the newer SSE4_1 and SSSE3"
bayindirh 167 days ago [-]
Ooh. They are at least ~15 years old, then. Maybe they scored some old 4-socket Dell R815s. 48 cores ain't that bad for a build server.
lucb1e 167 days ago [-]
It's kinda good they use such old systems, as the vast majority of pollution occurs during manufacturing of devices since we usually use them only a handful of years. Iirc the break-even point was somewhere around 25 years, as in, upgrading for energy efficiency then becomes worth it (source: https://wimvanderbauwhede.codeberg.page/articles/frugal-comp...). 15 goes a long way towards that!
On the other hand, I didn't dig very deep into the ticket history now but it sounds like this could have been expected: it broke once already 4 years ago (2021), so maybe planning an upgrade for when this happens again would be good foresight. Then again, volunteers... It's not like I picked up the work as an f-droid user either
NewJazz 167 days ago [-]
While I appreciate the sentiment, I think you may be misreading the "Emissions from production of computational resources" section of that link.
It says that for servers, 13-21 years is the break-even point for emissions from production vs. consumption.
The 25 year number is for consumer devices like phones and laptops.
I would also argue that average load on the servers comes into play.
miladyincontrol 166 days ago [-]
Moot point imo, no one says they have to buy new hardware. Used, affordable, but still much more modern hardware could still save them plenty on power usage and replace several systems with one.
ignoramous 167 days ago [-]
> about to ask people to donate, but they have $80k in their coffers
I'd still ask folks to donate. $80k isn't much at all given the time and effort I've seen their volunteers spend on keeping the lights on.
From what I recall, they do want to modernize their build infrastructure, but it is as big an investment as they can make. If they had enough in their "coffers", I'm sure they'd feel more confident about it.
It isn't like they don't have any other things to fix or address.
csdreamer7 167 days ago [-]
I would too but do you have a link to them talking about it?
$2-3k? That's barely the price of a bare lower-end Threadripper CPU, not a full Epyc server???
wongarsu 167 days ago [-]
At our supplier $2k would pay for a 1U server with a 16 core 3GHz Epyc 7313P with 32GB RAM, a tiny SSD and non-redundant power.
$3k pays for a 1U server with a 32 core 2.6GHz Epyc 7513 with 128GB RAM and 960GB of non-redundant SSD storage (probably fine for build servers).
All using server CPUs, since that was easier to find. If you want more cores or more than 3GHz things get considerably more expensive.
Timshel 167 days ago [-]
Yes, but those are Zen 3 Milan CPUs, released in 2021 I believe.
Not that they are bad or wouldn't be way better than what they have; just that I thought the parent was quite the optimist with his Zen4/Zen5 pricing.
wtallis 167 days ago [-]
OP did say "consumer Epyc", so presumably referring to the parts using the AM5 socket. From a quick check on Newegg, it looks like barebones servers for that platform start at under $1000, to which you need to add CPU, RAM, and storage. So a $3000 budget to assemble a low-end Zen4/5 EPYC server is realistic: $570 for the 16-core EPYC 4565P, a few hundred for DDR5 ECC unbuffered modules, a few hundred for an enterprise SSD, and you have a credible current-gen server from readily available parts at retail prices, without any of the enterprise pricing and procurement hassle.
BizarroLand 167 days ago [-]
I imagine they would need quite a few servers to replace their current setup.
Then there's also the overhead of setting up and maintaining the hardware in their location. It's not just a "solve this problem for ~$2,000 and be done with it".
I don't know the actual specs or requirements. Maybe 1 build server is sufficient, but from what I know there are nearly 4,000 apps on FDroid. 1 server might be swamped handling that much load in a timely manner.
wtallis 167 days ago [-]
One server with today's tech can easily replace several servers that are 12+ years old. 4000 apps doesn't sound like a lot of work for one machine, unless you assume almost all of them are releasing new builds more than once a week. A 16-core CPU can rebuild a full Gentoo desktop OS multiple times a week.
csdreamer7 167 days ago [-]
That was my intention; mATX AM5 parts.
speckx 167 days ago [-]
Is that $2k/$3k for the year?
wongarsu 167 days ago [-]
That's $2k/3k to get a box with fully assembled hardware delivered to your doorstep or to a DC of your choice.
Space in your basement or the colo rack of a datacenter along with power, data and cooling is an expense on top. But whatever old servers they have are going to take up more space and use more power and cooling. Upgrading servers that are 5+ years old frequently pays for itself because of the reduced operating costs (unless you opt for more processing power at equal operating cost instead)
c0balt 167 days ago [-]
Low-end EPYC (16-24 cores), especially from older generations, is not that expensive: $800-1.2k IME. Less when in a second-hand server.
doublepg23 167 days ago [-]
Perhaps the servers run Coreboot / Libreboot?
pclmulqdq 167 days ago [-]
I'm not even sure mainline Linux supports machines this old at this point. The cmpxchg16b instruction isn't that old, and I believe it's required now.
cwillu 167 days ago [-]
CMPXCHG8B is required as of a month or two ago, not 16B (i.e., the version from the 90's is now required)
32 bit Linux is still supported by the kernel... and... 'Debian, Arch, and Fedora still supports baseline x86_64'.
Please do not take things out of context.
FirmwareBurner 167 days ago [-]
>they have $80k in their coffers but I am curious why they haven't spent $2-3k on one of those Zen4 or Zen5 matx consumer Epyc servers
I would also like to know this.
Perz1val 167 days ago [-]
Yeah, and everybody has been complaining for years about how slow the builds are. I really want to know too
pastage 167 days ago [-]
I would much rather they spent that on having the devs network and travel; the servers work.
melodyogonna 167 days ago [-]
Why are the builds failing then?
tcfhgj 167 days ago [-]
planned obsolescence by Google
shadowgovt 166 days ago [-]
Beginning to use a CPU opcode that is 19 years old doesn't feel like planned obsolescence. If anything, it feels like unplanned obsolescence... "Oh hell what do you mean your CPU doesn't have that opcode? No, we've just been running the compiler with the default flags, and that opcode got added to the defaults two months ago after a 10-year fight about the possible consequences of changing defaults!"
Although I'm a little surprised to learn that the binary itself doesn't have enough information in its header to be able to declare that it needs SSSE3 to be executed; that feels like something that should be statically-analyzed-and-cached to avoid a lot of debugging headaches.
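There isn't a standard ELF field for "needs SSSE3", so about the best a binary can do today is a cheap self-check at startup. A hypothetical guard using the GCC/Clang builtin __builtin_cpu_supports(), which turns a mystery SIGILL into a readable error:

    #include <stdio.h>
    #include <stdlib.h>

    /* Runs before main(). __builtin_cpu_supports() is backed by CPUID;
     * GCC documents that __builtin_cpu_init() must be called first when
     * these builtins are used from a constructor. */
    __attribute__((constructor))
    static void require_isa(void) {
        __builtin_cpu_init();
        if (!__builtin_cpu_supports("ssse3") || !__builtin_cpu_supports("sse4.1")) {
            fprintf(stderr, "fatal: this binary needs SSSE3 and SSE4.1, "
                            "which this CPU does not report\n");
            exit(1);
        }
    }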
tcfhgj 166 days ago [-]
> "Oh hell what do you mean your CPU doesn't have that opcode [...]"
hobbyist dev? sure
Google? nope
shadowgovt 166 days ago [-]
Did they make any explicit guarantees that their newly-cut binaries would continue to support 20-year-old architectures?
Googlers aren't gods. It's a 100,000-person company; they're as vulnerable to "We didn't really think of that one way or the other" as anyone else.
ETA: It's actually not even Google code that changed (directly); Gradle apparently began requiring SSSE3 (https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153) and Google's toolchain just consumed the new constraint from its upstream.
Here, I'm not surprised at all; Google is not the kind of firm that keeps a test-lab of older hardware for every application they ship, so (particularly for their dev tooling) "It worked on my machine" is probably ship-worthy. I bet they don't even have an explicit architecture target for the Android build toolchain beyond the company's default (which is generally "The two most recent versions" of whatever we're talking about).
Angius 166 days ago [-]
They clearly don't
lupusreal 167 days ago [-]
Probably a case of "don't fix it if it ain't broke" keeping old machines in service too long, so now they broke.
FirmwareBurner 167 days ago [-]
That's like ignoring your 'Check Engine' light because the engine still runs.
benrutter 167 days ago [-]
This is pretty concerning, especially as FDroid is by far the largest non-google android store at the moment, something that I feel is really needed, regardless of your feelings about google.
Does anyone know of plans to resolve this? Will FDroid update their servers? Are google looking into rolling back the requirement? (this last one sounds unlikely)
dannyw 167 days ago [-]
I agree it’s a bit concerning but please keep in mind F-Droid is a volunteer-run community project. Especially with some EU countries moving to open source software, it would be nice to see some public funding for projects like F-Droid.
berkes 167 days ago [-]
> please keep in mind F-Droid is a volunteer-run community project.
To me, that's the worrying part.
Not that it's run by volunteers. But that all there's left between a full-on "tech monopoly" or hegemony, and a free internet, is small bands of underfunded volunteers.
Opposition to market dominance and monopolies by multibillion multinationals shouldn't just come from a few volunteers. If that's the case, just roll over and give up; the cause is lost. (As I've done, hence my defeatism.)
Aside from that: it being "a volunteer-run community" shouldn't be put forward as an excuse for why it's in trouble/has poor UX/is hard to use/is behind/etc. It should be a killer feature. Something that makes it more resilient/better attuned/easier/earlier adopting/etc.
Dr4kn 167 days ago [-]
The EU governments should gradually start switching to open source solutions. New software projects should be open source by default and only closed if there is a real reason for it.
The EU is already home to many open source contributors and companies. I like the Red Hat approach where you are profitable, but with open source solutions.
It's great for governments because you get support, but it's much easier to compete, which reduces prices.
Smaller companies also give more of their money to open source. Bigger companies can always fork it and develop it internally and can therefore pressure devs to do work for less. Smaller companies have to rely on the projects to keep going and doing it all in house would be way too expensive for most.
ethbr1 167 days ago [-]
> I like the Red Hat approach where you are profitable, but with open source solutions.
The Red Hat that was bought by IBM?
I agree with your goals, but the devil is in the methods. If we want governments to support open source, the appropriate method is probably a legislative requirement for an open source license + a requirement to fund the developer.
lupusreal 167 days ago [-]
It seems like every other year I read a story about Munich switching to Linux. It keeps happening so evidently it's not sticking very well. Either there are usability or maintenance problems, or Microsoft's sales and lobbying is too effective.
FMecha 167 days ago [-]
idk if you meant this, but I thought of F-Droid and other major open source projects being publicly funded by EU.
croes 167 days ago [-]
>But that all there's left between a full-on "tech monopoly" or hegemony, and a free internet, is small bands of underfunded volunteers.
Always has been.
theLegionWithin 167 days ago [-]
Apple has an iPhone app store monopoly, but Google is the bad guy here?
hogwash
camdroidw 167 days ago [-]
Google has recently lost two cases against the DoJ; keeping fingers crossed that Android will be divested.
shadowgovt 166 days ago [-]
It's interesting to me how people panicked about the idea that 23AndMe's bankruptcy implies that some unknown, untrusted third-party will have their genetic information, but people are also crowing at the idea that a company that has purchase history on all your smartphone apps (and their permissions, and app data backup) could be compelled by the government to divest that function to some unknown, untrusted third-party.
benrutter 167 days ago [-]
Hope I didn't come across as criticising FDroid here - it seems sucky to have build requirements change under your feet.
It's just I think that FDroid is an important project, and hope this doesn't block their progress.
nativeforks 167 days ago [-]
> Nice to see some public funding for projects like F-Droid
Definitely. An SSE4.1-instruction-set-based CPU for building apps in 2025? No way!!
happosai 167 days ago [-]
Maybe if f-droid is important to you, donate, so they can buy newer build server?
benrutter 167 days ago [-]
I'm not quite sure if I'm reading too much into this, but this comes across as a snarky response, as if I'd said "boo, fdroid sucks and owes me a free app store!".
Apologies if I came across like that; here's what I'm trying to convey:
- Fdroid is important
- This sounds like a problem, not necessarily one that's any fault of fdroid
- Does anyone know of a plan to fix the issue?
For what it's worth, I do donate on a monthly basis to fdroid through liberapay, but I don't think that's really relevant here?
happosai 167 days ago [-]
You are right, my message came across as too snarky. What I wanted to give was an actionable item for the readers here.
nativeforks 167 days ago [-]
This has now become a major issue for F-Droid, as well as for FOSS app developers. People are starting to complain about devs because they haven't been able to release new versions of their apps as promised (at least they don't show up on F-Droid).
chasil 167 days ago [-]
Is Westmere the minimum architecture needed for the required SSE?
Server hardware with the minimum v2 functionality can be found for a few hundred dollars.
A competent administrator with physical access could solve this quickly.
Take a ReaR image, then restore it on the new platform.
Where are the physical servers?
LtdJorge 167 days ago [-]
Zen 2 Epyc would barely double the price of older platforms if you buy an entire server, and would run circles around them.
chasil 167 days ago [-]
A slow computer that does what you want is infinitely more valuable than a fast computer that does not.
grim_io 167 days ago [-]
why would a fast computer refuse to do what you want?
chasil 167 days ago [-]
[flagged]
RealStickman_ 166 days ago [-]
1. That's still perfectly possible
2. We're talking about x86_64 CPUs here that have been open to install your own software basically since they existed
chasil 166 days ago [-]
More modern x86 comes with significant problems.
The minimum is now eight cores on a die for both AMD and Intel, so running a quad core system means staying on 14nm. You may loudly criticize holding back on a quad core system, but you aren't paying $47,500 per core to license Oracle Enterprise database.
The eight core minimum is a huge detriment for commercial software that is licensed by core.
This, and this alone, shatters your argument. Any other questions?
RealStickman_ 166 days ago [-]
You can still get quad cores, here's an Epyc CPU with four cores [0]
Here's also a recent Xeon quad core [1]
Beside that, could you please show me where the F-Droid build server uses an Oracle Database?
There are at least six Android app stores in China that have more than 100 million MAUs each: Huawei AppGallery, Tencent MyApp, Xiaomi Mi Store (or GetApps), Oppo, Vivo, and Honor stores.
IceWreck 167 days ago [-]
Huawei and Honor are separate app stores?
And Oppo and Vivo too?
In both instances one company owns the other - why have competing app stores?
npn 167 days ago [-]
Because some dumbass decided to ban Huawei before, forcing Chinese brands to split themselves into multiple sub-brands that operate independently.
TiredOfLife 167 days ago [-]
Huawei was banned because some dumbass at Huawei decided that sanction skirting was worth it
Amazon has a big one too. I also know of a popular one called Aptoide.
Dr4kn 167 days ago [-]
Amazon closes their app store on 2025-08-20, so in 7 days.
rs186 167 days ago [-]
*for non-Fire devices.
yellowapple 167 days ago [-]
I could've sworn they'd already closed it for non-Fire devices.
Suppafly 163 days ago [-]
>This is pretty concerning, especially as FDroid is by far the largest non-google android store at the moment
That's almost certainly not true.
charcircuit 167 days ago [-]
>FDroid is by far the largest non-google android store at the moment
Samsung Galaxy Store is much much bigger.
ykonstant 167 days ago [-]
Funny true story: I got my first smartphone in 2018, a Samsung Galaxy A5. I have it to this day, and it is the only smartphone I ever used. This is the first time I hear about Samsung Galaxy store! (≧▽≦)
ozim 167 days ago [-]
Largest not run by the corporations then ;)
benrutter 167 days ago [-]
Yup! I missed that one because I didn't realise it still existed. Woops!
lucb1e 167 days ago [-]
> Are google looking into rolling back the requirement? (this last one sounds unlikely)
That's apparently what they did last time. From the ticket:
"Back in 2021 developers complained that AAPT2 from Gradle Plugin 4.1.0 was throwing errors while the older 4.0.2 worked fine. \n The issue was that 4.1.0 wanted a CPU which supports SSSE3 and on the older CPUs it would fail. \n This was fixed for Gradle Plugin 4.2.0-rc01 / Gradle 7.0.0 alpha 9"
1oooqooq 167 days ago [-]
Why do you read "Google build tools cannot be built from source and were compiled with optional optimizations as required" and assume the right thing to do is to buy newer servers?
benrutter 166 days ago [-]
I'm not assuming anything; this is from a ticket fdroid filed with Google:
> Our machines run older server grade CPUs, that indeed do not support the newer SSE4_1 and SSSE3.[0]
I.e. the problem is that fdroid has older CPUs; newer ones would be able to build. I only mentioned it in terms of what the plans to fix might be. I have zero idea if upgrading servers is the best way to go.
Have you tried building AOSP from available sources?
Binaries everywhere. Tried to rebuild some of them with the available sources and noped the f out because that breaks the build so bad it's ridiculous.
zoobab 167 days ago [-]
"Binaries everywhere"
So much for "Open Source"
jeroenhd 167 days ago [-]
The binaries are open source, but Google doesn't design their build chain to recompile from scratch every time.
Also, you don't need to compile all of AOSP just to get the toolchain binaries.
orblivion 167 days ago [-]
With how strict F-Droid is I would have expected them to build from source all the way down. Though that sounds like a daunting task so I don't blame them.
gbin 167 days ago [-]
Everything is open source, if you can read assembly ;)
bluGill 167 days ago [-]
Machine code. Assembly is higher level. Since data and instructions can be mixed, machine code is harder to decode - that might be a byte of data or an instruction. Mel would have [ab]used this fact to make his programs work. It is worse on x86, where instructions are not fixed length, but even on ARM you can run into problems at times.
snake42 167 days ago [-]
You can always lift machine code to assembly. It's a 1-to-1 process.
bluGill 167 days ago [-]
No you cannot. While it is 1 to 1, you still need to know where to start: if you start at the wrong place, data will be interpreted as an asm instruction and things will decode legally - but invalidly. It is worse on CISC (like x86), where instructions are different lengths, so you can jump to the middle byte of a long instruction and decode a shorter instruction. (RISC sometimes starts to get CISC features as they add more instructions as well.)
If the code was written reasonably you can usually find enough clues to figure out where to start decoding and thus get a reasonable assembly output, but even then you often need to restart the decoding several times because the decoder can get confused at function boundaries depending on what other data gets embedded and where it is embedded. Be glad self modifying code was going out of style in the 1980's and is mostly a memory today as that will kill any disassembly attempts. All the other tricks that Mel used (https://en.wikipedia.org/wiki/The_Story_of_Mel) also make your attempts at lifting machine code to assembly impossible.
Akronymus 167 days ago [-]
It definitely isn't a 1:1 process, as there are multiple ways to encode the same instruction (possibly even with some subtle side effects depending on the encoding)
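Both points are easy to demonstrate with five bytes (hand-checked, but treat it as a sketch):

    /* The same bytes decode to different instruction streams depending on
     * where decoding starts, so a disassembler that guesses the wrong
     * entry point produces a legal but wrong listing:
     *
     *   offset 0:  b8 31 c0 c3 90   mov eax, 0x90c3c031
     *   offset 1:     31 c0         xor eax, eax
     *                 c3            ret
     *                 90            nop
     */
    static const unsigned char code[] = { 0xb8, 0x31, 0xc0, 0xc3, 0x90 };

    /* And encodings are not unique either: 31 c0 (XOR r/m32, r32) and
     * 33 c0 (XOR r32, r/m32) both mean "xor eax, eax". */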
... this is why we get DRM. Source modification is what hurts them.
rbanffy 167 days ago [-]
Yes. Available sources mean nothing without a reproducible build process.
ivanjermakov 167 days ago [-]
So open source is only in the name, noted
pwdisswordfishz 167 days ago [-]
Debian also seems to have given up.
ethan_smith 167 days ago [-]
Using Docker with QEMU CPU emulation would be a more maintainable solution than recompiling aapt2, as it would handle future binary updates automatically without requiring custom patches for each release.
Even my last, crazy long in the tooth, desktop supported this and it lived to almost 10 years old before being replaced.
However at the same time, not even offering a fallback path in non-assembly?
wtallis 167 days ago [-]
> However at the same time, not even offering a fallback path in non-assembly?
There's probably not any hand-written assembly at issue here, just a compiler told to target x86_64-v2. Among others, RHEL 9 and derivatives were built with such options. (RHEL 10 bumped up the minimum spec again to x86_64-v3, allowing use of AVX.)
shadowgovt 166 days ago [-]
Or even, a compiler told to target nothing in particular, and a default finally toggled over from "Oh, we're 'targeting x86'? So CPUs from the early 2000s then" to "Oh, we're 'targeting x86'? So CPUs from the mid-2010s then."
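That's also how these requirements sneak into binaries with no hand-written assembly at all: ordinary scalar code plus a target level. A hypothetical example (whether the compiler actually vectorizes, and with which instructions, depends on its version, hence "may"):

    /* Scalar code with a branch in the loop body. With the x86-64
     * baseline target the compiler is limited to SSE2 compare/blend
     * sequences; raising the level lets it pick newer single-instruction
     * forms with zero source changes:
     *
     *   gcc -O3 -march=x86-64    -c clamp.c   (SSE2 only, the baseline)
     *   gcc -O3 -march=x86-64-v2 -c clamp.c   (may emit SSE4.1, e.g. pmaxsd)
     */
    void clamp_to_zero(int *v, int n) {
        for (int i = 0; i < n; i++)
            if (v[i] < 0)
                v[i] = 0;
    }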
vocx2tx 167 days ago [-]
Looking at the issue, their builders seem to be Opteron G3s (K10?)[0]
At this point they're guzzling so much power that the electricity is more expensive than a replacement platform.
ozim 167 days ago [-]
I imagine it has to be like that, as they usually get $1500 per month in donations.
You could buy a newer one but I guess they have other stuff they have to pay for.
WesolyKubeczek 167 days ago [-]
This is a bit of a vicious circle. How much of that money goes into just keeping those servers running? The electricity bill alone, geez. They could do a dedicated fundraiser to get themselves two boxes that are a decade old and still have spare parts available; coming from the Broadwell era, they would have enough instruction set support to cover the baseline towards which multiple distros are converging (Haswell and up).
Zak 167 days ago [-]
Given their target audience, they could probably just request a hardware donation. Some sysadmin out there is probably getting rid of exactly what they need.
Palomides 167 days ago [-]
if it's colocated (surely the case) they aren't paying per kWh
a012 167 days ago [-]
For $500 you can get a decent refurbished server on ebay that supports those “new” extensions
delfinom 167 days ago [-]
$1500/month is probably swallowed by what power pigs those Opterons are. Like, they are bad, real bad.
yaro330 167 days ago [-]
I am 100% sure that if they put out a call to action and asked for hardware donations they would be able to get newer stuff. A Ryzen 7 1700 goes for as cheap as $50, and DDR4 RAM at supported speeds (2133 MHz) is also dirt cheap.
chillingeffect 167 days ago [-]
>$1500/month
Wow, I just got into NewPipe/F-Droid. It's neat to think even a donation the size of mine can be almost individually meaningful :)
yonatan8070 167 days ago [-]
I have a home server with a 9th gen i7 that's doing jack sh!t most of the time, is there a way to donate some compute time to build F-Droid packages?
CJefferson 167 days ago [-]
The problem with offering fallbacks is testing -- there isn't any reasonable hardware which you could use, because as you say it's all very old and slow.
pestatije 167 days ago [-]
I'm sure they'll appreciate your old desktop donation
karteum 167 days ago [-]
I don't fully understand: aren't gradle and aapt2 open source?
If you want to build Buildroot or OpenWrt, the first thing it will do is compile its own toolchain (rather than reusing the one from your distro) so that it can lead to predictable results. I would apply the same rationale to F-Droid: why not compile the whole toolchain from source rather than using a binary gradle/aapt2 that uses unsupported instructions?
I agree, this should be the case, but Gradle specifically relies on downloading prebuilt java libraries and such to build itself and anything you build with it, and sometimes these have prebuilt native code inside. Unlike buildroot and any linux distribution, there's no metadata to figure out how to build each library, and the process for them is different between each library (no standards like make, autotools and cmake), so building the gradle ecosystem from source is very tedious and difficult.
1oooqooq 167 days ago [-]
Having worked with both mvn and gradle, I always have a good chuckle when I hear about npm "supply chain" hacks.
Still haven't. Currently, most of the devs aren't aware of this underlying issue!
micw 167 days ago [-]
As far as I can see, SSE4.1 was introduced in CPUs in 2011. That's more than 10 years ago. I wonder why such old servers are still in use. I'd assume that a modern CPU would do the same amount of work with a fraction of the energy, so it does not even make economic sense to run such outdated hardware.
Does anyone know the numbers of build servers and the specs?
eadmund 167 days ago [-]
> I'd assume that a modern CPU would do the same amount of work with a fraction of energy so that it does not even make economical sense to run such outdated hardware.
There are 8,760 hours in a non-leap year. Electricity in the U.S. averages 12.53 cents per kilowatt hour[1]. A really power-hungry CPU running full-bore at 500 W for a year would thus use about $550 of electricity. Even if power consumption dropped by half, that’s only about 10% of the cost of a new computer, so the payoff date of an upgrade is ten years in the future (ignoring the cost of performing the upgrade, which is non-negligible — as is the risk).
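Spelled out: 0.5 kW × 8,760 h = 4,380 kWh per year, and 4,380 kWh × $0.1253/kWh ≈ $549 per year, so halving consumption saves roughly $275 per year.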
And of course buying a new computer is a capital expense, while paying for electricity is an operating expense.
You can buy a mini PC for less than $550. For $200 on Amazon you can get an N97-based box with 12 GB RAM and 4 cores running at 3 GHz and a 500 GB SATA SSD. That's got to be as fast as their current build systems, and it supports the required instructions.
officeplant 167 days ago [-]
Those single-memory-channel shitboxes aren't even fast enough to be usable during big Windows updates, let alone for use in production.
wtallis 166 days ago [-]
One channel of DDR5-4800 actually competes pretty well against four channels of DDR3-1333 spread across two chiplets, which was the best Opteron configuration old enough to not have SSE4.1.
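Rough peak numbers, at 8 bytes per channel per transfer: 1 × 4800 MT/s × 8 B = 38.4 GB/s for the single DDR5 channel vs. 4 × 1333 MT/s × 8 B ≈ 42.7 GB/s for the quad-channel DDR3 setup. Close enough that the newer cores win on everything else.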
kasabali 165 days ago [-]
not to even mention the "cooling" solutions they have
1oooqooq 167 days ago [-]
If you don't understand bandwidths and how long components can run at the 80th percentile before failure, you're out of your element in this discussion.
adrian_b 167 days ago [-]
It was introduced in Intel Penryn, in November 2007.
However, AMD CPUs did not implement it until Bulldozer, in mid-2011.
While the older Opteron CPUs lacked the many additional instructions provided by Bulldozer, including AVX and FMA, for many applications they were significantly faster than the Bulldozer-based CPUs, so there was little incentive to upgrade them before the launch of AMD Epyc in mid-2017.
SSE 4.1 is a cut point in supporting old CPUs for many software packages, because older CPUs have a very high overhead for divergent computations (e.g. with if ... else ...) inside loops that are parallelized with SIMD instructions.
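To make the divergence point concrete, the per-lane "select" that an if/else inside a vectorized loop boils down to looks roughly like this in SSE intrinsics; pre-SSE4.1 it costs three bitwise ops where SSE4.1 has a single blend (the second function needs -msse4.1 or -march=x86-64-v2 to compile):

    #include <emmintrin.h>   /* SSE2 */
    #include <smmintrin.h>   /* SSE4.1 */

    /* select(mask, a, b): take a where mask lanes are all-ones, b where
     * they are zero. */
    static inline __m128i select_sse2(__m128i mask, __m128i a, __m128i b) {
        /* three instructions plus extra register pressure */
        return _mm_or_si128(_mm_and_si128(mask, a),
                            _mm_andnot_si128(mask, b));
    }

    static inline __m128i select_sse41(__m128i mask, __m128i a, __m128i b) {
        /* one instruction: pblendvb takes a where the mask high bits are set */
        return _mm_blendv_epi8(b, a, mask);
    }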
cjaackie 167 days ago [-]
I haven't seen the real answer that I suspect here: the build servers are that one dual-socket AMD board which runs open firmware and has no ME/PSP.
ffaser5gxlsll 167 days ago [-]
On the server side, probably not, but I'd like to point out that old hardware is not uncommon, and it's going to become more and more likely as time passes, especially in the desktop space.
I was hit by this scenario in the 2000s with an old desktop PC I had, also in the 10-year range, which I was using just for boring stuff and random browsing; it was old, but perfectly adequate for the purpose. With time, programs got rebuilt with some version of SSE it didn't support. When even Firefox switched to the new instruction set, I had to essentially trash a perfectly working desktop PC as it became useless for the purpose.
LukeShu 167 days ago [-]
I was going to say that I assume that the reason for such old CPUs is the ability to use Canoeboot/GNU Boot. But you absolutely can put an SSE4.2 CPU in a KGPE-D16 motherboard. So IDK.
whizzter 167 days ago [-]
Because setting up servers is an annoying piece of grunt work that people avoid doing more than absolutely necessary. There's a reason the expensive options of AWS, Azure and Google Cloud make money: much of it "just works" when you focus on applications rather than the infra (until you actually need to do something advanced and the obscure commands or clicking bites you in the ass).
heavyset_go 167 days ago [-]
Hardware after the first couple of generations of x86_64 multicore processors makes perfectly capable machines to use as servers, even for tasks you want to push off to a build farm.
Pyrodogg 167 days ago [-]
A few months ago Adobe finally updated Lightroom Classic to require these processor extensions, to squeeze out all the matrix mults it can for AI features, including in CPU mode.
It's amazing how long of a run top end hardware from ~2011 has had (just missed the cutoff by a few months). It's taken this long for stuff to really require these features.
The Catima thread makes FDroid sound like a really difficult community to work with. Although I'm basing this on one person's comment and other people agreeing, not on any knowledge or experience.
> But this is like everything with F-Droid: everything always falls on a deaf man's ears. So I would rather not waste more time talking to a brick wall. If I had the feeling it was possible to improve F-Droid by raising issues and trying to discuss how to solve them I wouldn't have left the project out of frustration after years of putting so much time and energy into it.
eptcyka 167 days ago [-]
F-droid is thoroughly understaffed and yet incredibly ambitious and shrewd about its goals - they want to build all the apps in a reproducible manner. There's lots of friction around deviating from builds that fit within their model. The system is also slow; it takes a long while before a build shows up. I think f-droid could benefit immensely from more funding, saying that as someone who has never seen f-droid's side but has worked on an app that was published there.
ohdeargodno 166 days ago [-]
There's a bunch of stupid behavior all around (running AGP in alpha being one), but F-Droid asking maintainers to disable baseline profiles because they break reproducibility for them is thoroughly stupid and demanding.
typpilol 167 days ago [-]
I saw that too and was wondering what kind of drama happened in the past
noirscape 167 days ago [-]
Very unexciting stuff; it's just your typical long-running FOSS project issues as I understand it. Lead maintainer of F-Droid is entrenched in his ways "cuz it works for me", which leads to stonewalling any attempts to change or improve the F-Droid workflow[0], but since he holds the keys to the kingdom (and the name recognition prevents forks), they keep him around.
Everyone else then tries to work around him and through a mixture of emotional appealing, downplaying the importance of certain patches and doing everything in very tiny steps then try to improve things. It's an extremely mentally draining process that's prone to burnout on the part of the contributors, which eventually boils over and then some people quit... which might start a conversation on why nobody wants to contribute to the FOSS project. That conversation inevitably goes nowhere because the people you'd want to hold that conversation with are so fed up with how bad things have gotten that they'd rather just see the person causing trouble removed entirely. (Which may be the correct course of action, but this is an argument often given without putting forward a proper replacement/considering how the project might move forward without them. Some larger organizations can handle the removal of a core maintainer, most can't.) Rinse and repeat that cycle every five years or so.
F-Droid isn't at all unique in this regard, and most people are willing to ignore it "because it's free, you shouldn't have any expectations". Any long running FOSS project that has significant infrastructure behind it will at some point have this issue and most haven't had a great history at handling it, since the bus factor of a lot of major FOSS projects is still pretty much one point five people. (As in, one actual maintainer and one guy that knows what levers to pull to seize control if the maintainer actually gets hit by a bus, with the warning that they stop being 0.5 of a bus factor and become 0 if they do that while the maintainer is still around.)
This is the sort of stuff that makes me want to pursue FIRE. There's so much good that could be done, but isn't because people need to be making money for someone else.
Then again who is to say that I would be a better custodian than this guy?
chillingeffect 167 days ago [-]
I like your energy; and I like your awareness that more control/different center of power may not help. This is where community-oriented leadership techniques could go a long way. To build trust, maintain peoples' roles and dignity, but to increase that awareness and enable floodlight focus (big picture) in addition to flashlight focus.
yjftsjthsd-h 167 days ago [-]
> Google’s new aapt2 binary in AGP 8.12.0
Given F-Droid's emphasis on isolating and protecting their build environment, I'm kind of surprised that they're just using upstream binaries and not building from source.
jraph 167 days ago [-]
Relatedly, we don't really have any up-to-date free software build of the Android SDK AFAIK. To build Android apps, we all rely on the Google binaries, which are non-free.
It seems quite implausible that F-Droid is actually running on hardware that predates those instruction set extensions. They're seeing wider adoption by default these days precisely because hardware which doesn't support them is getting very rare, especially in servers still in production use. Are you sure this isn't simply a matter of F-Droid using VMs that are configured to not expose those instructions as supported?
roywashere 167 days ago [-]
This is sort of like a bug I hit last year when the mysql docker container suddenly started requiring x86-64-v2 after a patch level upgrade and failed to start: https://github.com/docker-library/mysql/issues/1055
yyyk 167 days ago [-]
Their servers are so old, even an entirely different architecture emulating x86_64 would still see a performance increase... So there's no OSS argument here - they could even buy a Talos, have no closed firmware, and still see a performance increase with emulation. If they don't care about the firmware, there are plenty of very cheap x86 options which are still more modern.
> For all those following this, we have the budget to buy new hardware, what we lack is a skilled hoster who has committed to physically hosting a new bare metal box for us.
DrewADesign 167 days ago [-]
> Their servers are so old
When I read this, pop culture has trained me to expect an insult, like: “Their servers are so old, they sat next to Ben Franklin in kindergarten.”
queenkjuul 166 days ago [-]
My home server is so old, it gets its driver's license next year
flykespice 167 days ago [-]
> The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support. This is similar to a 2021 AGP 4.1.0 issue, but it has returned, and now affects hundreds of apps.
I don't know why they have enabled modern CPU flags for a simple intermediary tool that compiles the APK resource files; it was so unnecessary.
Welp, there go my plans of salvaging an old laptop to build my Android apps.
If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd.
maxloh 167 days ago [-]
There is no point for Google to push planned obsolescence on the PC or server space. They don't have a market there.
userbinator 167 days ago [-]
It does benefit them to make it harder for competitors.
maxloh 167 days ago [-]
When you mention "competitors," what industries or markets are you referring to?
No one would write Android apps on a Chromebook, and making it harder to do so would only reduce the incentive for companies to develop Android apps.
How could Google benefit from pushing a newer instruction set standard on Windows and macOS?
heavyset_go 167 days ago [-]
The one moderately popular competitor is the project in the OP that is suffering directly from this upstream change.
jeroenhd 167 days ago [-]
I doubt Google even cares about F-Droid. The Play Store competes with the iOS App Store, Huawei's AppGallery, and probably the Samsung store long before F-Droid becomes relevant.
If they required a Google-specific Linux distro to build this thing or if they went the Apple route and added closed-source components to the build system, this could be seen as a move to mess with the competition, but this is simply a developer assuming that most people compiling apps have a CPU that was produced less than 15 years ago (and that the rest can just recompile the toolchain themselves if they like running old hardware).
With Red Hat and Oracle moving to SSE4.1 by default, the F-Droid people will run into more and more issues if they don't upgrade their old hardware.
ohdeargodno 166 days ago [-]
F-Droid is so insignificantly small that its entire userbase is smaller than the number of users of each app in the top 500 of every single large store (Play, Galaxy, Huawei, etc.).
This happened because nobody gives a shit about F-Droid, not because it's somehow a "threat" with unmaintained apps.
maxloh 167 days ago [-]
While your perspective makes some sense, it's highly improbable. It's unlikely that Google was aware of F-Droid's infrastructure specs, or its inability to fix the issue in advance.
It seems you're suggesting a very specific, targeted attack.
fsflover 167 days ago [-]
> It seems you're suggesting a very specific, targeted attack.
Former Chrome team member here. Nightingale's suspicions were plausible but incorrect. The primary cause of every one of these we looked into over the years (and there were indeed many) was teams not bothering to test against Firefox because its market share was low compared to the cost of testing for it. In many cases teams tried to reduce support burden by simply marking "unsupported" any browser they didn't explicitly test, which was sometimes just Chrome and Safari. We were distressed at this and wrote internal guidance around not doing things like the above, and tried to distribute it and point back to it frequently. Unfortunately Firefox' share continued to go down, engineering teams continued to be resource-constrained, and the problem continued to occur.
Several years ago I glumly opined internally that Firefox had two grim choices: abandon Gecko for Chromium, or give up any hope of being a meaningful player in the market. I am well aware that many folks (especially here) would consider the first of those choices worse than the second. It's moot now, because they chose the second, and Firefox has indeed ceased to be meaningful in the market. They may cease to exist entirely in the next five years.
I am genuinely unhappy about this. I was hired at Google specifically to work on Firefox. I was always, and still remain, a fan of Firefox. But all things pass. Chrome too will cease to exist some day.
fsflover 166 days ago [-]
Thank you for the interesting insight. This is more or less what I expected.
> suspicions were plausible but incorrect
The suspicions were not about the evil will of the engineers. It's the will of Google itself (or managers, if you want), which plays the main role here. This is exactly what causes the following:
> engineering teams continued to be resource-constrained
The resource constraints had nothing to do with intentionally not funding "support a competing browser properly", though, and everything to do with just not funding engineering and test work at all except to build Shiny Idea That Got A Promo.
Despite its size, Google does shoestring engineering of most things, which is why so much is deprecated over time -- there's never budget for maintenance.
So I mean in some sense yes, there's valid criticism of Google's "will" here, but that will was largely unaware of Firefox, and the consequences burned Google products and customers just as much or more in the long run. Nightingale looked past individual instances to see a pattern, but didn't continue to scale the pattern up to first-party products as well.
tonyhart7 167 days ago [-]
"If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd"
The idea that not supporting a 20+ year old system is "planned obsolescence" is a bit shallow
WesolyKubeczek 167 days ago [-]
But you don't, so you won't, scoring one for the planned obsolescence crowd.
And so won't anyone else who has time to complain about planned obsolescence, and that includes myself.
msgodel 167 days ago [-]
The Win95 API is pretty incomplete. That was actually a terrible OS. The oldest I'd go playing this game with anything serious is probably XP.
jeroenhd 167 days ago [-]
It can read files, write files, and allocate memory. Is there anything else you need to compile software?
msgodel 167 days ago [-]
Can it? Files on Windows 95 and files on most Unix-like OSes are very different things.
userbinator 166 days ago [-]
They're the same from the perspective of a stream of persistent bytes.
If you want "very different" then look at the record-based filesystems used in mainframes.
CodesInChaos 166 days ago [-]
Do you have any recommended reading about record-based filesystems?
jve 167 days ago [-]
As if it were a one-off thing to support some system. You must maintain it and account for it in all the features you bring in going forward.
tetris11 167 days ago [-]
I'm a bit lost in this thread, but I've written up what I know for other dummies like me
Aapt2 is a standalone x86_64 binary used to build Android APKs for various CPU targets.
Previous versions of it needed only a simpler instruction set, but the new version requires the extra SIMD instruction set SSE4.1. A lot of CPUs after 2008 support this, but not F-Droid's current server farm?
its-summertime 167 days ago [-]
> Our machines run older server grade CPUs
So it's a bit of both: older hardware, and hardware whose feature set doesn't match consumer parts. I'd imagine some server hardware vendors supported SSE4 way earlier than most, and some probably supported it way later than most too.
wpm 167 days ago [-]
I've got an old Ivy Bridge-EP Dell workstation they can borrow. Goddamn, SSE4.1 is nearly old enough to drink.
jeroenhd 167 days ago [-]
SSE4.1 can legally buy lightly alcoholic beverages in various European countries already. Next year, it can buy strong spirits.
Using AMD hardware that's "only" 13 years old can also cause this problem, though.
rpcope1 167 days ago [-]
Yeah I was kind of shocked too. Core 2 could do both of those instruction sets. A used Dell Precision can be had for very little and probably would be grossly more efficient than whatever they're using.
edgan 167 days ago [-]
That F-Droid even insists on doing the build itself is one of the reasons I created Discoverium.
That F-Droid insists on doing the build itself ensures all apps provided by F-Droid are free software (as in freedom) and proven to be buildable by someone other than the app developer.
mschuster91 167 days ago [-]
> and proven to be buildable by someone other than the app developer
Yup. That's a huge, huge issue - IME especially once Java enters the scene. Developers have all sorts of weird stuff in their global ~/.m2/settings.xml that they set up a decade ago and probably don't even think about... real fun when they hand over the project to someone else.
edgan 167 days ago [-]
The issue is more complicated than that.
twodave 167 days ago [-]
Do you mean the overall issue or that F-Droid’s guarantees are arguable? The guarantees may not be the whole discussion, but for many they are the most relevant piece.
Edit: or perhaps you mean that isn’t the only way to provide such guarantees, which is the implication I got reading your other replies.
yjftsjthsd-h 167 days ago [-]
How so?
devrandoom 167 days ago [-]
So I should take a binary from a random stranger because trust me bro?
edgan 167 days ago [-]
It is a modified version of Obtainium. You get it from the author via GitHub.
devrandoom 162 days ago [-]
It's still a binary from a stranger. You don't know from which source it was built.
eighthave 165 days ago [-]
The limiting factor for upgrading our buildserver is finding a trusted, skilled sysadmin to physically install, setup and maintain new hardware at the high level of security that is needed for a release buildserver for a project like F-Droid. It also needs to be in a trusted physical location. Hetzner is definitely not that.
CommenterPerson 167 days ago [-]
Non-hacker here. The title says "modern". I don't need modern, have a 10 year old phone, can I still get the occasional simple app from F-Droid?
I upped my (small) monthly contribution. Hope more people contribute, and also work to build public support.
Also, for developers .. please include old fashioned credit cards as a payment method. I'd like to contribute but don't want to sign up for yet another payment method.
exabrial 167 days ago [-]
Man, Android could have been way cooler if it actually used real virtual machines, or at least the JVM.
pjmlp 167 days ago [-]
I stood by Oracle because, as has been proven in the long term, Android is Google's J++, and Kotlin became Google's C#.
Hardly any different from what was in the genesis of .NET.
Nowadays they support up to Java 17 LTS, a subset only as usual, mostly because Android was being left behind accessing the Java ecosystem on Maven central.
And even though ART is now updatable via the Play Store, all the way down to Android 12, they see no need to move beyond the Java 17 subset, until they most likely start missing out again on key libraries that decide to adopt newer features.
Also, stuff like Panama, Loom, Vector, Valhalla (if ever): don't count on them ever being supported on ART.
At least they managed to push into the mainstream the closest idea to OSes like Oberon, Inferno, JavaOS and co: regardless of what you think about the superiority of UNIX clones, here they have to content themselves with a managed userspace, something that Microsoft failed at with Longhorn, Singularity and Midori due to their internal politics.
exabrial 163 days ago [-]
>Panama, Loom, Vector, Valhalla (if ever), don't count on them ever being supported on ART
This is pretty sad IMHO, as Java 17 was a true turning point. Java 21 is icing, and Java 25 is an incredible refinement with some fascinating new features that are really well thought out.
aembleton 167 days ago [-]
> Kotlin became Google's C#
Are Google buying Jetbrains?
pjmlp 167 days ago [-]
They almost could; after all, they have outsourced most of the Android tooling effort to JetBrains, given that Android Studio is mostly IntelliJ + CLion, and Kotlin is the main Android language nowadays.
Also Kotlin Foundation is mostly JetBrains and Google employees.
jeroenhd 167 days ago [-]
ARM phones didn't have virtualisation back in the day so that would've been impossible.
I don't think virtualization CPU support is needed for a JVM to run efficiently (though it could help with process isolation). At the end of the day the JVM is mostly a compiler!
tonyhart7 167 days ago [-]
JVM??? hell no, native FTW
exabrial 166 days ago [-]
I think that's part of the problem. The JVM rarely runs interpreted code; nearly everything is compiled to native code.
trenchpilgrim 167 days ago [-]
I thought SSE 4.1 dates back to 2008 or so?
starkparker 167 days ago [-]
The build servers appear to be AMD Opteron G3s, which only support part of SSE4 (SSE4a). Full SSE4 support didn't land until Bulldozer (late 2011).
karlgkk 167 days ago [-]
I appreciate that this is a volunteer project, but my back-of-the-hand math suggests that if they upgraded to a $300 laptop using a 10nm Intel chip, it would pay for itself in power usage within a few years. Actually, probably less, considering an i3-N305 has more cores and substantially faster single-thread performance.
And yes, you could get that cost down easily.
wtallis 167 days ago [-]
Yes, a used laptop would be an upgrade from server hardware of that vintage, in performance and probably in reliability. If they're really using hardware that old, that is itself a big red flag that F-Droid's infrastructure is fragile and unmaintained.
(A server that old might not have any SSDs, which would be insane for a software build server unless it was doing everything in RAM.)
johnklos 167 days ago [-]
How is it that if hardware is old, that means it's unmaintained, or that if it's old, it can't have SSDs? Neither of those things are typically inferred from age.
I still maintain old servers, and even my Amiga server has an SSD.
wtallis 167 days ago [-]
If they're running hardware that old, and it's causing them software compatibility problems, then we can infer that their infrastructure is unmaintained, because the cost of moving to newer hardware is so low that the cost of newer hardware could not plausibly be the reason they haven't moved to new hardware. There's dirt cheap used server hardware that would be substantially faster, cheaper to operate, and not have software compatibility issues like this. Money can't be preventing them from using newer hardware.
We don't know for sure the servers don't have SSDs, but we do know that back in the days of server hardware that didn't support SSE4.1, SSDs had not yet displaced hard drives for mainstream storage, so it's likely that servers that old didn't originally ship with SSDs. It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.
A server at that age is also going to be harder to repair when something dies, and it's due for something to die. If they lose a PSU it might be cheaper to replace the whole system with something a bit less old. Other components they'd have to rely on replacing with something used, from a different manufacturer than the original, or use a newer generation component and hope it's backwards compatible. Hence why I said using hardware that old would imply their infrastructure is fragile.
But all of this is still just speculation because nobody involved with F-Droid has actually explained what specific hardware they're using, or why. So I'm still not convinced that the possibility of a misconfigured hypervisor has been ruled out.
johnklos 166 days ago [-]
> If they're running hardware that old [...] then we can infer that their infrastructure is unmaintained
You lost me there. One thing has nothing to do with the other.
People have reasons for running the hardware they run. Do you know their reasons? If you do, please share. If not, there's no connection whatsoever between old hardware and unmaintained infrastructure.
Is my AlphaServer DS25 unmaintained? It's very old server hardware.
Is my 1981 Chevette unmaintained? It's REALLY old. Can you infer from the fact that I have a car from 1981 that it's unmaintained? I'd say that reasonable people can infer that it's definitely maintained, since it would most likely not still be running if it weren't.
> It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.
I don't know where you learned about servers, but no, it's not a weird choice to use newer storage in older servers. Not at all. Not even a little bit. Maybe you've worked somewhere that bought Dell servers with storage and trashed the servers when the storage needed upgrading, but that's definitely not normal.
wtallis 166 days ago [-]
> If not, there's no connection whatsoever between old hardware and unmaintained infrastructure.
See, this is just you being unreasonable.
Yes, we can all imagine why people might keep old hardware around. But your AlphaServer is at best your hobby, not production infrastructure that lots of people and other projects rely on. Nobody's noticing whether or not it crashes. Likewise for your Chevette: nobody cares until it stalls out in traffic, then everyone around you will make the reasonable assumption that it's behind on maintenance.
If F-Droid is indeed using ancient hardware, and repeatedly experiencing software failures as a result, then the most likely explanation is that their infrastructure is inadequately maintained. Sure, it's not a guarantee, it's not the only possibility, but it's a reasonable assumption to work with until such time as someone from F-Droid explains what the hell is going on over there. And if there's nobody available to explain what their infrastructure is and why it is showing symptoms of being old and unmaintained, that's more evidence for this hypothesis.
eimrine 167 days ago [-]
There are other possible virtues besides performance and probable reliability.
trenchpilgrim 167 days ago [-]
I have computers from the early 2000s that now have SSDs in them. You can get cheap adapters to use SATA and CompactFlash storage on old machines.
theandrewbailey 167 days ago [-]
I work in the refurb division of an ewaste recycling company[0]. $300 will get you a very nice used Thinkpad or Dell Latitude. They might even get by with some ~$50 mini desktops.
It will have the Intel ME, which makes the whole open-source ideology... compromised?
johnklos 167 days ago [-]
If they're relying on binaries from Google, then it's already compromised.
karlgkk 167 days ago [-]
There are a handful of vendors that will sell you an Intel chip with the ME disabled, as well as ARM vendors that ship boards without an ME equivalent at all.
The point of my post still stands.
eimrine 167 days ago [-]
Do I need to be the US Military for that?
Intel ME is not a feature for the user; it is intended to control any modern CPU except the ones going to the US Army/Navy. It is needed to make Stuxnet-class attacks. The latest chip where the ME can provably be disabled is the 3rd gen.
karlgkk 165 days ago [-]
Purism sells a Comet Lake box with the ME disabled (or so they say).
Many ARM vendors sell powerful arm computers without any ME-analog on board.
> It is needed to make Stuxnet-class attacks.
I have issues with the presence of the ME and I think we agree on a lot of things, but this statement is lunacy lol
eimrine 164 days ago [-]
So there is no fresh x86 processor without that shite. I have no devices with ARM, so I have nothing to say about the latter.
tmtvl 167 days ago [-]
Someone send these people a Slimbook.
mrheosuper 167 days ago [-]
it's insane, i would give them my old xeon haswell machine for free, but the shipping cost is likely more than the cost of the machine itself.
nativeforks 167 days ago [-]
Yes, SSE4.1 and SSSE3 were introduced around 2006, and the F-Droid build server still builds modern and some of the most popular FOSS apps without them.
SylvieLorxu 166 days ago [-]
Might be worth noting that several devs have suggested users use IzzyOnDroid instead. Due to IzzyOnDroid distributing official upstream builds (after scanning), they're not dependent on any build server.
They do have build servers for the purpose of confirming upstream APKs match the source code using reproducible builds, but those are separate processes that don't block each other (unlike F-Droid's rather monolithic structure).
IzzyOnDroid has been faster with updates than F-Droid for years, releasing app updates within 24 hours for most cases.
> For all those following this, we have the budget to buy new hardware, what we lack is a skilled hoster who has committed to physically hosting a new bare metal box for us.
Arech 167 days ago [-]
It is super annoying how SW vendors forcefully deprecate good-enough hardware.
I genuinely hate that, as Mozilla has deprived me of Firefox's translation feature because of it.
crote 167 days ago [-]
The problem is that your "good enough" is someone else's "woefully inadequate", and sticking to the old feature sets is going to make the software horribly inefficient - or just plain unusable.
I'm sure there's someone out there who believes their 8086 is still "good enough", so should we restrict all software to the features supported by an 8086: 16-bit computations only, 1 MB of memory, no multithreading, no SIMD, no floats, no isolation between OS and user processes? That would obviously be ludicrous.
At a certain point it just doesn't make any sense to support hardware that old anymore. When it is cheaper to upgrade than to keep running the old stuff, and only a handful of people are sticking with the ancient hardware for nostalgic reasons, should that tiny group really be holding back basically your entire user base?
Arech 167 days ago [-]
Ah, c'mon, spare me these strawman arguments. Good enough is good enough. If F-Droid isn't worried about that, you definitely have no reason to worry on their behalf.
"A tiny group is holding back everyone" is another silly strawman argument - all decent packaging/installation systems support providing different binaries for different architectures. It's just a matter of compiling one more binary and putting it into a package. Nobody is being held back by anyone; you can't make a sillier argument than that...
bluGill 167 days ago [-]
But it isn't good enough. SIMD provides measurable improvements to some people's code. To those people what we had before isn't good enough. Sure for the majority SIMD provides no noticeable benefit and so what we had before is good enough, but that isn't everybody.
johnklos 166 days ago [-]
Are you SURE that nobody has figured out how to have code that uses SIMD if you have it, and not use it if you don't?
Your suggestion falls flat on its face when you look at software where performance REALLY matters: ffmpeg. Guess what? It'll use SIMD, but can compile and run just fine without.
I don't understand people who make things up when it comes to telling others why something shouldn't be done. What's it to you?
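For what it's worth, the runtime-selection pattern johnklos describes looks roughly like this in C (a minimal sketch assuming GCC or clang; the function names are illustrative, and this is not ffmpeg's actual code):

    /* Compile once; the AVX2 path is gated at run time, so the binary
     * still starts on CPUs without AVX2. */
    #include <stdio.h>
    #include <stddef.h>

    __attribute__((target("avx2")))   /* allow AVX2 codegen in this function only */
    static void add_avx2(const float *a, const float *b, float *out, size_t n) {
        for (size_t i = 0; i < n; i++) out[i] = a[i] + b[i];  /* auto-vectorized */
    }

    static void add_scalar(const float *a, const float *b, float *out, size_t n) {
        for (size_t i = 0; i < n; i++) out[i] = a[i] + b[i];  /* baseline fallback */
    }

    typedef void (*add_fn)(const float *, const float *, float *, size_t);

    static add_fn pick_add(void) {
        __builtin_cpu_init();               /* populate the CPU feature flags */
        return __builtin_cpu_supports("avx2") ? add_avx2 : add_scalar;
    }

    int main(void) {
        float a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1}, out[4];
        pick_add()(a, b, out, 4);           /* selected once, at startup */
        printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);
        return 0;
    }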
pabs3 166 days ago [-]
It definitely is, you can even do that automatically with SIMDe and runtime function selection.
ffmpeg is a bad example, because it's the kind of project that has lots of infrastructure around incorporating hand-optimized routines with inline assembly or SIMD intrinsics, and runtime detection to dispatch to different optimized code paths. That's not something you can get for free on any C/C++ code base; function multiversioning needs to be explicitly configured per function. By contrast, simply compiling with a newer instruction set permits the compiler's autovectorization to use newer instructions whenever and wherever it finds an opportunity.
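For the SIMDe route mentioned above, the usage is roughly the following sketch (based on SIMDe's documented naming conventions; treat the header path and the exact portable names as assumptions to check against SIMDe's docs):

    /* Code written against an SSE4.1 intrinsic, via SIMDe's portable wrapper.
     * On a CPU/target without SSE4.1, SIMDe substitutes a plain-C fallback
     * instead of emitting an instruction that would SIGILL. */
    #include <simde/x86/sse4.1.h>
    #include <stdio.h>

    int main(void) {
        simde__m128i a = simde_mm_set_epi32(1, 2, 3, 4);
        simde__m128i b = simde_mm_set_epi32(4, 3, 2, 1);
        simde__m128i m = simde_mm_max_epi32(a, b);  /* pmaxsd is SSE4.1 */
        int out[4];
        simde_mm_storeu_si128((simde__m128i *)out, m);
        printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]);
        return 0;
    }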
sparkie 167 days ago [-]
OTOH, if software wants to take advantage of modern features, it becomes hell to maintain if you have to have flags for every possible feature supported by CPUID. It's also unreasonable to expect maintainers to package dozens of builds for software that is unlikely to be used.
There are some guidelines[1][2] for developers to follow for a reasonable set of features, where they only need to manage ~4 variants. In this proposal the lowest set of features includes SSE4.1, which covers nearly any x86_64 CPU from the past 15 years. In theory we could use a modern CPU to compile the 4 variants and ship them all in a FatELF, so we only need to distribute one set of binaries. This of course would be completely impractical if we had to support every possible CPU's distinct features, and the binaries would be huge.
[1]: https://lists.llvm.org/pipermail/llvm-dev/2020-July/143289.h...
[2]: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
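For a concrete sense of how those variants differ: only the compiler flags change between builds, and the compiler's predefined macros let code see which level it was built for (a small sketch using standard GCC/clang flags and macros):

    /* Build the same file once per microarchitecture level, e.g.:
     *   cc -O2 -march=x86-64    -o app-baseline app.c   (SSE2 only)
     *   cc -O2 -march=x86-64-v2 -o app-v2       app.c   (adds SSSE3, SSE4.1, ...)
     *   cc -O2 -march=x86-64-v3 -o app-v3       app.c   (adds AVX2, FMA, ...)
     * (-march=x86-64-vN needs a reasonably recent GCC or clang.) */
    #include <stdio.h>

    int main(void) {
    #if defined(__AVX2__)
        puts("built for x86-64-v3 or later: AVX2 may appear anywhere");
    #elif defined(__SSE4_1__)
        puts("built for x86-64-v2 or later: SSE4.1 may appear anywhere");
    #else
        puts("built for baseline x86-64: SSE2 only");
    #endif
        return 0;
    }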
In most cases (and this was the case with Mozilla I referred to) it's only a matter of compiling code that already has all the necessary support. They are using some upstream component that works perfectly fine on my architecture. They just decided to drop it, because they could.
sparkie 167 days ago [-]
It's not only your own software, but also its dependencies. The link above is for glibc, and is specifically addressing incompatibility issues between different software. Unless you are going to compile your own glibc (for example, doing Linux From Scratch), you're going to depend on features shipped by someone else. In this case that means either baseline, with no SIMD support at all, or level A, which includes SSE4.1. It makes no sense for developers to keep maintaining software for 20 year old CPUs when they can't test it.
johnklos 166 days ago [-]
> It makes no sense for developers to keep maintaining software for 20 year old CPUs when they can't test it.
This is horribly inaccurate. You can compile software for 20 year old CPUs and run that software on a modern CPU. You can run that software inside of qemu.
FYI, there are plenty of methods of selecting code at run time, too.
If we take what you're saying at face value, then we should give up on portable software, because nobody can possibly test code on all those non-x86 and/or non-modern processors. A bit ridiculous, don't you think?
sparkie 166 days ago [-]
> You can compile software for 20 year old CPUs and run that software on a modern CPU.
That's testing it on the new CPU, not the old one.
> You can run that software inside of qemu.
Sure you can. Go ahead. Why should the maintainer be expected to do that?
> A bit ridiculous, don't you think?
Not at all. It's ridiculous to expect a software developer to give any significance to compatibility with obsolete platforms. I'm not saying we shouldn't try. x86 has good backward compatibility. If it still works, that's good.
But if I implement an algorithm in AVX2, should I also be expected to implement a slower version of the same algorithm using SSE3 so that a 20 year old machine can run my software?
You can always run an old version of the software, and you can always do the work yourself to backport it. It's not my job as a software developer to be concerned about ancient hardware unless someone pays me specifically for that.
Would you expect Microsoft to ship Windows 12 with baseline compatibility? I don't know if it is, but I'm pretty certain that if you tried running it on a 2005 CPU, it would be pretty much non-functional, as performance would be dire. I doubt it would run anyway, due to UEFI requirements which wouldn't be present on a machine with such a CPU.
johnklos 162 days ago [-]
> Would you expect Microsoft to ship Windows 12
There's the issue. You think that Windows is normal and an example of stuff that's relevant to open source software.
If people write AVX-512 and don't want to target anything else, then fine. But then it's simply not portable software.
Software that's supposed to be portable should be, you know, portable.
The implication is that you can decide to not support 20 year old CPUs and still have portable software. People who think that are just ignorant because if software is portable, it'll work on 20 year old CPUs. The "20 year old CPUs" part is a red herring, since it has nothing to do with anything aside from the fact that portable software will also run on 20 year old CPUs as well as different CPUs.
As an aside, instead of making up excuses for bad programmers, you might be interested to learn that software compiled with optimizations for newer amd64 didn't show any significant improvement over software compiled for all amd64.
Also, you have things backwards: code written and compiled today and run on 2005 CPUs wouldn't be "pretty much non-functional, as performance would be dire" unless you're talking about Windows. This is a problem with programmers and with Windows, and targeting the newest "features" of amd64 doesn't fix that. Those things aren't even related.
It's interesting how many people who either don't understand programming, or intentionally pretend not to, want to make excuses for software like Windows.
yjftsjthsd-h 167 days ago [-]
> Unless you are going to compile your own glibc (for example, doing Linux From Scratch),
It's not that hard to use gentoo.
RealStickman_ 166 days ago [-]
The F-Droid builds have been slow for years, and with how old their servers apparently are, that isn't even surprising in retrospect.
kijin 167 days ago [-]
Requiring (supposedly) universally available CPU instructions is one thing. Starting to require it in a minor version update (8.11.1 -> 8.12.0) is a whole different thing. What the heck happened to semantic versioning? We can't even trust patch updates anymore these days. The version numbers might as well be git commit IDs.
o11c 167 days ago [-]
Note: the underlying blame here fundamentally belongs to whoever built AGP / Gradle with non-universal flags, then distributed it.
It's fine to ship binaries with hard-coded cpu flag requirements if you control the universe, but otherwise not, especially if you are in an ecosystem where you make it hard for users to rebuild everything from source.
IshKebab 167 days ago [-]
Exactly. Everything should be compiled to target i386.
/s (should be obvious but probably not for this audience)
pabs3 166 days ago [-]
They should be compiled for the CPU baseline of the ABI they are using, and check if newer instructions are available before using them. This is what Debian does, so they can have maximum hardware support: https://wiki.debian.org/InstructionSelection
Why? There's nothing wrong with having minimum requirements beyond that. They don't have to use Debian's policy, and multiversioning adds enough complexity that basically nobody does it (I've only ever seen it used in video codecs).
userbinator 167 days ago [-]
> control the universe
Guess what the company behind Android wants to do...
rasz 166 days ago [-]
> (SSE4.1, SSSE3)
This means their build infrastructure burns excessive amounts of power, being run by volunteers in basements/homelabs on vintage, museum-grade hardware (15-year-old Opterons/Phenoms).
Gamers went through this 14 years ago, with 'No Man's Sky' being the first big game to require SSE 4.1 for no particular reason.
andix 167 days ago [-]
Do I get it correctly that they run their build infrastructure on at least 15-year-old hardware?
1vuio0pswjnm7 166 days ago [-]
Perhaps there should be more than one F-Droid
For example, if they published their exact setup for building Android apps so others could replicate it
How many Android users compile the apps they use themselves?
Perhaps increasing that number would be a goal worth pursuing
nativeforks 167 days ago [-]
There are even some "Unknown problem" entries on the IzzyOnDroid repo for app publishing, even when ensuring reproducible builds. Izzy says: "Not necessarily 'your fault' – baseline often has such issues": https://github.com/CompassMB/MBCompass/issues/90
Seems like he is talking about the developer being responsible for that also!
SylvieLorxu 166 days ago [-]
IzzyOnDroid can publish updates even if it's not reproducible, this is not an "app publishing" issue at all. IzzyOnDroid can deal with AGP 8.12 fine.
Also "not necessarily your fault" means "probably not your fault", the opposite of "your fault"
pabs3 166 days ago [-]
Google should be compiling for the CPU baseline of the ABI their binaries are for, and then checking if newer instructions are available before using them. Just like glibc and other projects do. The Debian documentation for this (https://wiki.debian.org/InstructionSelection) mentions tools to do this, like SIMDe and GCC/clang FMV.
Am I missing something, or does SIMDe only help for cases where a program is using instruction intrinsics, and it doesn't do anything to address cases where the compiler decides to use SIMD as a result of auto-vectorization?
pabs3 166 days ago [-]
That's correct, but usually compilers don't do that if you use the CPU baseline.
wtallis 166 days ago [-]
> but usually compilers don't do that if you use the CPU baseline
That's a problem that people are trying to solve by not using an ancient CPU baseline. Do you have a reasonable proposal for how else we should enable widespread use of hardware functionality that's less than two decades old?
pabs3 165 days ago [-]
The compiler could auto-enable function multi-versioning (FMV) for functions where auto-vectorisation gets triggered. At program start, FMV checks which instructions are available and updates function pointers to the right functions. Things like glibc use FMV to switch things like memcpy to SIMD-optimised versions.
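A minimal sketch of that mechanism using GCC/clang's target_clones attribute (the load-time resolver is generated by the compiler; the function name here is illustrative):

    #include <stdio.h>
    #include <stddef.h>

    /* The compiler emits one clone per listed target plus an ifunc resolver
     * that binds the symbol to the best clone the CPU supports at startup. */
    __attribute__((target_clones("avx2", "sse4.1", "default")))
    float dot(const float *a, const float *b, size_t n) {
        float sum = 0.0f;
        for (size_t i = 0; i < n; i++) sum += a[i] * b[i];  /* vectorized per clone */
        return sum;
    }

    int main(void) {
        float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
        float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
        printf("%g\n", dot(a, b, 8));  /* no per-call CPUID check */
        return 0;
    }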
fancythat 167 days ago [-]
I don't know how many servers they are using or the server specs beyond ancient Opterons, but how is this even an issue in 2025?
On Hetzner (not affiliated), at this moment, an i7-8700 (AVX2 supported) with 128 GB RAM, 2x1 TB SSD and 1 Gbit uplink costs 42.48 eur per month, VAT included, in their server auction section.
What are we missing here, besides that build farm was left to decay?
WesolyKubeczek 167 days ago [-]
Either they want to run on ideologically pure hardware too, without pesky management bits in it (or even indeed UEFI), or they are just "it used to work perfectly" guys.
In the former case, I fail to see how ME or its absence is relevant to building Android apps, which they do using Google-provided binaries that have even more opportunity to inject naughty bits into the software. In the latter case, I better forget they exist.
fancythat 167 days ago [-]
I agree with you. Unfortunately, the simplest explanation is often the truth, so they probably just ignored this issue until it surfaced.
WesolyKubeczek 167 days ago [-]
In other words,
> they are just "it used to work perfectly" guys.
bill_mcgonigle 167 days ago [-]
Well if you wanted to compromise F-Droid you could target their build server's ME or a cloud vm's hypervisor.
To do a supply-chain attack on Google's SDK would be much more expensive and less likely to succeed. Google isn't going to be the attacker.
The recent attack on AMI/Gigabyte's ME shows how a zero-day can bootkit a UEFI server quite easily.
There are newer Coreboot boards than Opteron, though. Some embedded-oriented BIOSes let you fuse out the ME. You are warned this is permanent and irreversible.
F-Droid likely has upgrade options even in the all-open scenario.
bluGill 167 days ago [-]
QEMU static on Linux supports automatic emulation of missing instructions. Depending on details that I haven't figured out, it can be a lot slower running this way or close enough to native. I have got that working, but it was a pain and I don't remember what was needed (most of the work was done by someone else, but I helped).
doppelgunner 164 days ago [-]
Imagine explaining to your app that it cannot compile because the server thinks Snapdragon 800 is still the future. F-Droid is basically the grandparent who insists their flip phone works just fine.
shrubble 167 days ago [-]
Is it the CPUs or the compilers? Or possibly a CI/CD runner that has to run something that can’t run on these CPUs?
jdbdnxjdhe 167 days ago [-]
I don't get the issue; the binary target is completely independent of the host target on all but the most basic setups.
nicman23 167 days ago [-]
wtf, they can't still be running opterons. it has to be that they are using qemu with g3 as a cpu profile... right?
shmerl 167 days ago [-]
Can't cross compilation help for that? The CPU compiling doesn't need to match the target.
a99c43f2d565504 167 days ago [-]
It's not the target that is now requiring new instructions, but one of the components in the build tools.
shmerl 167 days ago [-]
I see.
mandown2308 167 days ago [-]
On the other hand, we have "personal" data centers for AI and mining farms for crypto.
OldfieldFund 167 days ago [-]
I think this might give Google some ideas...
1970-01-01 167 days ago [-]
Put another way, Google is requiring you to have 65nm Intel chips. 2009-ish.
nicman23 166 days ago [-]
now that i think of it, is this because they want to run without blobs and without ME/PSP?
skyzouwdev 167 days ago [-]
That’s a tough one. It’s ironic that the very platform meant to keep apps open and accessible is now bottlenecked by outdated hardware.
Upgrading the build farm CPUs seems like the obvious fix, but I’m guessing funding and coordination make it less straightforward. In the meantime, forcing devs to downgrade AGP or strip baseline profiles just to ship feels like a pretty big friction point.
Long term, I wonder if F-Droid could offer an optional “modern build lane” with newer hardware, even if it means fewer guarantees of full reproducibility at first. That might at least keep apps from stalling out entirely.
1970-01-01 167 days ago [-]
I've said this before, but I'll say it again. Running on donations is not a viable strategy for any long-term goal. FOSS needs to passively invest the donations. That is a viable long-term strategy. Now when things like this happen, it becomes a major line item moment, and not a limp-along situation, with yet another WE NEED YOUR HELP banner blocking off 1/2 their website.
solodolo 167 days ago [-]
[dead]
solodolo 167 days ago [-]
[dead]
hulitu 167 days ago [-]
> The root cause: Google’s new aapt2 binary in AGP 8.12.0 started requiring CPU instructions (SSE4.1, SSSE3) that F-Droid’s build farm hardware doesn’t support.
Very intelligent move from Google. Now you can't compile "Hello World" without SSE4.1, SSSE3. /s
Are there any x86 tablets with Android?
vardump 167 days ago [-]
There are very few 17+ year old build servers at this point. Or laptops and desktops, for that matter.
do_not_redeem 167 days ago [-]
[flagged]
yjftsjthsd-h 167 days ago [-]
Half the point is that I trust this middleman more than the app devs. When app developers turn evil (https://news.ycombinator.com/item?id=38505229), I explicitly want someone reviewing things and blocking software that works against my interests before it gets to me.
noirscape 167 days ago [-]
Obtainium assumes that the app developer is a trustworthy entity, when the reality is that much of what makes the mobile ecosystem as fucked up as it is comes from the app developer. (Due to bad incentives created by mobile platform makers, mainly Apple.)
You need a middleman in place in case the app developer goes bad.
qart 167 days ago [-]
I have it installed. But the only thing I get updates for is Obtainium itself. There's no catalogue of apps, so I haven't installed anything via Obtainium.
They put the disclaimer on top that this list is not meant as an app store or catalog. It's meant for apps with somewhat complex requirements for adding to Obtainium. But it serves well as a catalog since most of the major open source apps are listed.
user070223 167 days ago [-]
Try Discoverium
em-bee 167 days ago [-]
this seems to be a general app finder and tracker. useful, but entirely different from what f-droid does, namely verify that apps are actually Free Software or Open Source and buildable from source.
oguz-ismail 167 days ago [-]
How is this not another middleman (with a political banner in its README no less)?
prmoustache 167 days ago [-]
At this point it is not political; the banner mentions a fact and a tragedy and links to donations for reputable NGOs.
teekert 167 days ago [-]
I know this is off-topic, as is this whole sub-thread by now. But is there a way to read the news as the Israelis do? I sometimes read rt.com (even though I need a vpn for that, somehow my government feels I'm not allowed to study this??), it helps me understand how Russian media presents news to their citizens. Is there anything like that for Israeli news?
Our Dutch news (and I think most EU news) is pretty much presenting us with the view that Israel has lost it (stories about young men searching for food being shot in the genitals for fun and such [0]), so I'm very curious how their government presents things to its civilians.
Would you prefer English content? You could try ynetnews.com, which I believe is translated from Ynet's Hebrew articles, for a very mainstream Israeli source.
There are also fully English sources like the Times of Israel, though it has sort of an international audience, not only Israelis.
teekert 166 days ago [-]
Thanx! (Yes English is best!)
user070223 167 days ago [-]
I think it acts more as an RSS feed reader than something that builds and hosts apps on its own.
spacemule 167 days ago [-]
[flagged]
graemep 167 days ago [-]
Not sure what you found, but some of the "interesting links" on his website suggest a conspiracy theorist.
npoc 167 days ago [-]
If I was the state, conspiring against the people, the first thing I'd do would be to program the masses to ridicule the intelligent ones who spot the signs and theorise about a conspiracy - I'd teach the masses to point and laugh at wacky "conspiracy theorists"
graemep 167 days ago [-]
What I would do is spread obviously ridiculous theories in order to distract attention from the real problem.
npoc 166 days ago [-]
"Obviously ridiculous" - this often depends on whether you are aware of certain pretexts or not.
For example, whether or not you're aware that the banking system is collecting interest on all the money in the world, every second of the day, and it created it all out of thin air.
flykespice 167 days ago [-]
> If he actually believed what he said
Believe what? A fact that is being actively documented in Gaza by NGOs and corroborated by numerous news agencies internationally?
This is all coming across as dishonest (especially when looking at your own homepage).
Sent1n3l 167 days ago [-]
[flagged]
AshamedCaptain 167 days ago [-]
A shitton of people, including all F-Droid users, would take FOSS ideology over newfangled, bloated, "non-decrepit" development tools _any day_.
But in any case, this is a false dichotomy, and likely an exaggerated one to begin with.
mid-kid 167 days ago [-]
I think it's extremely useful to have stricter requirements on how programs are built, to make sure that developers don't do stupid things that make code harder for others to compile.
The tools in question in OP should be easy to build from source and not rely on the host's architecture, to be usable on platforms like ARM and RISCV. It's clear that in the android ecosystem, people don't care, so F-Droid can't do miracles (the java/gradle ecosystem is just really bad at this), but this would not happen if the build tools had proper build recipes themselves.
guappa 167 days ago [-]
As a user, i'm glad when devs use old tools so that my battery has a chance of lasting the whole day and my apps don't take 10 seconds just to open.
cnst 167 days ago [-]
> As a user, i'm glad when devs use old tools so that my battery has a chance of lasting the whole day and my apps don't take 10 seconds just to open.
Yup, same here! The story is as old as time, and the examples are plentiful. First Slashdot, then Reddit, and now GitHub, all became far, far slower and less usable once they were "improved" by folks engaging in resume-driven development.
I am, too, as a user, quite pleased that F-Droid is keeping it cool and reliable for the actual users.
guappa 166 days ago [-]
On GitHub, besides the slowness, the number of clicks is increasing! I now have to click a "..." thing that opens a menu with only 1 item in it to see the test build. And of course that (proprietary) tool follows the trend, so I need another number of clicks to finally get to the logs and see what failed.
BoredPositron 167 days ago [-]
>> This has led to multiple “maintenance” versions in a short time, confusing users and wasting developer time, just to work around infrastructure issues outside the developer’s control.
What an entitled conclusion.
https://developers.redhat.com/blog/2021/01/05/building-red-h...
Think of how much faster their servers would be with one of those Epyc consumer cpus.
I was about to ask people to donate, but they have $80k in their coffers. I realize their budget is only $17,000 a year, but I am curious why they haven't spent $2-3k on one of those Zen4 or Zen5 matx consumer Epyc servers as they are around under $2k under budget. If they have a fleet of these old servers I imagine a Zen5 one can replace at least a few of them and consume far less power and space.
https://opencollective.com/f-droid#category-BUDGET
Not sure if this includes their Librapay donations either:
https://liberapay.com/F-Droid-Data/donate
This is not always a given. In our virtualization platform, we have upgraded a vendor supplied VM recently, and while it booted, some of the services on it failed to start despite exposing a x86_64v2 + AES CPU to the said VM. Minimum requirements cited "Pentium and Celeron", so it was more than enough.
It turned out that one of the services used a single instruction added in a v3 or v4 CPU, and failed to start. We changed the exposed CPU and things have returned to normal.
So, their servers might be capable and misconfigured, or the binary might require more that what it states, or something else.
On the other hand, I didn't dig very deep into the ticket history now but it sounds like this could have been expected: it broke once already 4 years ago (2021), so maybe planning an upgrade for when this happens again would be good foresight. Then again, volunteers... It's not like I picked up the work as an f-droid user either
It says for servers that 13-21 years is the break even for emissions from production vs consumption.
The 25 year number is for consumer devices like phones and laptops.
I would also argue that average load on the servers comes into play.
I'd still ask folks to donate. £80k isn't much at all given the time and effort I've seen their volunteers spend on keeping the lights on.
From what I recall, they do want to modernize their build infrastructure, but it is as big as an investment they can make. If they had enough in their "coffers", I'm sure they'd feel more confident about it.
It isn't like they don't have any other things to fix or address.
For example, here's a recent NLNet sponsorship that helped reproducible builds ship (an effort that began in 2023): https://f-droid.org/en/2025/05/21/making-reproducible-builds...
Their 2025 year in review: https://f-droid.org/en/2025/01/21/a-look-back-at-2024-f-droi...
$3k pays for a 1U server with a 32 core 2.6GHz Epyc 7513 with 128GB RAM and 960GB of non-redundant SSD storage (probably fine for build servers).
All using server CPUs, since that was easier to find. If you want more cores or more than 3GHz things get considerably more expensive.
Not that they are bad and would not be way better than what they have, just that I though the parent was quite the optimist with his Zen4/Zen5 pricing.
Then there's also the overhead of setting up and maintaining the hardware in their location. It's not just a "solve this problem for ~$2,000 and be done with it".
I don't know the actual specs or requirements. Maybe 1 build server is sufficient, but from what I know there's nearly 4,000 apps on FDroid. 1 server might be swamped handling that much overhead in a timely manner.
Space in your basement or the colo rack of a datacenter along with power, data and cooling is an expense on top. But whatever old servers they have are going to take up more space and use more power and cooling. Upgrading servers that are 5+ years old frequently pays for itself because of the reduced operating costs (unless you opt for more processing power at equal operating cost instead)
See https://lkml.org/lkml/2025/4/25/409
RHEL 8 is still supported and Ubuntu is still baseline x86_64 I believe for commercial distros. Not sure about SuSE.
Deprecated for Debian
https://www.debian.org/releases/stable/release-notes/issues....
> Deprecated for Debian
> https://www.debian.org/releases/stable/release-notes/issues....
32 bit Linux is still supported by the kernel... and... 'Debian, Arch, and Fedora still supports baseline x86_64'.
Please do not take things out of context.
I would also like to know this.
Although I'm a little surprised to learn that the binary itself doesn't have enough information in its header to be able to declare that it needs SSSE3 to be executed; that feels like something that should be statically-analyzed-and-cached to avoid a lot of debugging headaches.
hobbyst dev? sure
Google? nope
Googlers aren't gods. It's a 100,000-person company; they're as vulnerable to "We didn't really think of that one way or the other" as anyone else.
ETA: It's actually not even Google code that changed (directly); Gradle apparently began requiring SSSE3 (https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153) and Google's toolchain just consumed the new constraint from its upstream.
Here, I'm not surprised at all; Google is not the kind of firm that keeps a test-lab of older hardware for every application they ship, so (particularly for their dev tooling) "It worked on my machine" is probably ship-worthy. I bet they don't even have an explicit architecture target for the Android build toolchain beyond the company's default (which is generally "The two most recent versions" of whatever we're talking about).
Does anyone know of plans to resolve this? Will FDroid update their servers? Are google looking into rolling back the requirement? (this last one sounds unlikely)
To, me, that's the worrying part.
Not that it's ran by volunteers. But that all there's left between a full-on "tech monopoly" or hegemony, and a free internet, is small bands of underfunded volunteers.
Opposition to market dominance and monopolies by multibillion multinationals shouldn't just come from a few volunteers. If that's the case, just roll over and give up; the cause is lost. (As I've done, hence my defaitism)
Aside from that: it being "a volunteer ran community" shouldn't be put as an excuse for why it's in trouble/has poor UX/is hard to use/is behind/etc. It should be a killer feature. Something that makes it more resilient/better attuned/easier/earlier adopting/etc.
The EU is already home to many OS contributors and companies. I like the Red Hat approach where you are profitable, but with open source solutions. It's great for governments because you get support, but it's much easier to compete, which reduces prices.
Smaller companies also give more of their money to open source. Bigger companies can always fork it and develop it internally and can therefore pressure devs to do work for less. Smaller companies have to rely on the projects to keep going and doing it all in house would be way too expensive for most.
The Red Hat that was bought by IBM?
I agree with your goals, but the devil is in the methods. If we want governments to support open source, the appropriate method is probably a legislative requirement for an open source license + a requirement to fund the developer.
Always has been.
hogwash
It's just I think that FDroid is an important project, and hope this doesn't block their progress.
Definitely, SSE4.1 instruction set based CPU, for building apps in 2025, No way!!
Appologies if I came across like that, here's what I'm trying to convey:
- Fdroid is important
- This sounds like a problem, not necessarily one that's any fault of fdroid
- Does anyone know of a plan to fix the issue?
For what it's worth, I do donate on a monthly basis to fdroid through liberapay, but I don't think that's really relevant here?
Server hardware at the minimum v2 functionality can be found for a few hundred dollars.
A competent administrator with physical access could solve this quickly.
Take a ReaR image, then restore it on the new platform.
Where are the physical servers?
The minimum is now eight cores on a die for both AMD and Intel, so running a quad core system means staying on 14nm. You may loudly criticize holding back on a quad core system, but you aren't paying $47,500 per core to license Oracle Enterprise database.
The eight core minimum is a huge detriment for commercial software that is licensed by core.
This, and this alone, shatters your argument. Any other questions?
Here's also a recent Xeon quad core [1]
Beside that, could you please show me where the F-Droid build server uses an Oracle Database?
[0] https://www.amd.com/en/products/processors/server/epyc/4004-... [1] https://www.intel.de/content/www/de/de/products/sku/236193/i...
For any software licensed by core count, modern systems are usually at a disadvantage.
Next question please.
Not even sure it's in the top 10
Low quality software tends to be popular among the general public because they're very bad at evaluating software quality.
Edit: searching online found this if anyone else is interested https://www.androidauthority.com/best-app-stores-936652/
And Oppo and Vivo too?
In both instances one company owns the other - why have competing app stores?
That's almost certainly not true.
Samsung Galaxy Store is much much bigger.
That's apparently what they did last time. From the ticket:
"Back in 2021 developers complained that AAPT2 from Gradle Plugin 4.1.0 was throwing errors while the older 4.0.2 worked fine. \n The issue was that 4.1.0 wanted a CPU which supports SSSE3 and on the older CPUs it would fail. \n This was fixed for Gradle Plugin 4.2.0-rc01 / Gradle 7.0.0 alpha 9"
> Our machines run older server grade CPUs, that indeed do not support the newer SSE4_1 and SSSE3.[0]
I.e. the problem is because fdroid have older CPUs, newer ones would be able to build. I only mentioned it in terms of what the plans to fix might be. I have zero idea if upgrading servers is the best way to go.
[0] https://issuetracker.google.com/issues/438515318?pli=1
https://android.googlesource.com/platform/frameworks/base/+/...
Binaries everywhere. Tried to rebuild some of them with the available sources and noped the f out because that breaks the build so bad it's ridiculous.
So much for "Open Source"
Also, you don't need to compile all of AOSP just to get the toolchain binaries.
If the code was written reasonably you can usually find enough clues to figure out where to start decoding and thus get a reasonable assembly output, but even then you often need to restart the decoding several times because the decoder can get confused at function boundaries depending on what other data gets embedded and where it is embedded. Be glad self modifying code was going out of style in the 1980's and is mostly a memory today as that will kill any disassembly attempts. All the other tricks that Mel used (https://en.wikipedia.org/wiki/The_Story_of_Mel) also make your attempts at lifting machine code to assembly impossible.
https://youtu.be/eunYrrcxXfw
Even my last, crazy long in the tooth, desktop supported this and it lived to almost 10 years old before being replaced.
However at the same time, not even offering a fallback path in non-assembly?
There's probably not any hand-written assembly at issue here, just a compiler told to target x86_64-v2. Among others, RHEL 9 and derivatives were built with such options. (RHEL 10 bumped up the minimum spec again to x86_64-v3, allowing use of AVX.)
[0] https://en.wikipedia.org/wiki/AMD_10h
You could buy a newer one but I guess they have other stuff they have to pay for.
Wow, i just got into newpipe/fdroid. Its neat to think even a donation the size of mine can be almost individually meaningful :)
If you want to build buildroot or openwrt, the first thing it will do is compiling your own toolchain (rather than reusing the one from your distro) so that it can lead to predictable results. I would have the same rationale for f-droid : why not compile the whole toolchain from source rather than using a binary gradle/aapt2 that uses unsupported instructions?
https://gitlab.com/fdroid/admin/-/issues/593#note_2681207153
Not sure how long it will take to get resolved but that thread seems reassuring even if there isn't a direct source that it was fixed.
In the thread you linked to people are confusing a typo correction ("mas fixed" => "was fixed") as a claim about this new issue being fixed.
The one that was fixed is this similar old issue from years ago: https://issuetracker.google.com/issues/172048751
Does anyone know the numbers of build servers and the specs?
There are 8,760 hours in a non-leap year. Electricity in the U.S. averages 12.53 cents per kilowatt hour[1]. A really power-hungry CPU running full-bore at 500 W for a year would thus use about $550 of electricity. Even if power consumption dropped by half, that’s only about 10% of the cost of a new computer, so the payoff date of an upgrade is ten years in the future (ignoring the cost of performing the upgrade, which is non-negligible — as is the risk).
And of course buying a new computer is a capital expense, while paying for electricity is an operating expense.
1: https://www.eia.gov/electricity/monthly/epm_table_grapher.ph...
However the AMD CPUs did not implement it until Bulldozer, in mid 2011.
While they lacked the many additional instructions provided by Bulldozer, also including AVX and FMA, for many applications the older Opteron CPUs were significantly faster than the Bulldozer-based CPUs, so there were few incentives for upgrading them, before the launch of AMD Epyc in mid 2017.
SSE 4.1 is a cut point in supporting old CPUs for many software packages, because older CPUs have a very high overhead for divergent computations (e.g. with if ... else ...) inside loops that are parallelized with SIMD instructions.
I was hit by this scenario in the 2000s with an old desktop pc I had, also in the 10ys range, I was using just for boring stuff and random browsing, which was old, but perfectly adequate for the purpose. With time programs got rebuilt with some version of SSE it didn't support. When even firefox switched to the new instruction set, I had to essentially trash a perfectly working desktop pc as it became useless for the purpose.
It's amazing how long of a run top end hardware from ~2011 has had (just missed the cutoff by a few months). It's taken this long for stuff to really require these features.
F-Droid admin issue: https://gitlab.com/fdroid/admin/-/issues/593
Catima example: https://github.com/CatimaLoyalty/Android/issues/2608
MBCompass case: https://github.com/CompassMB/MBCompass/issues/88
> But this is like everything with F-Droid: everything always falls on a deaf man's ears. So I would rather not waste more time talking to a brick wall. If I had the feeling it was possible to improve F-Droid by raising issues and trying to discuss how to solve them I wouldn't have left the project out of frustration after years of putting so much time and energy into it.
Everyone else then tries to work around him and through a mixture of emotional appealing, downplaying the importance of certain patches and doing everything in very tiny steps then try to improve things. It's an extremely mentally draining process that's prone to burnout on the part of the contributors, which eventually boils over and then some people quit... which might start a conversation on why nobody wants to contribute to the FOSS project. That conversation inevitably goes nowhere because the people you'd want to hold that conversation with are so fed up with how bad things have gotten that they'd rather just see the person causing trouble removed entirely. (Which may be the correct course of action, but this is an argument often given without putting forward a proper replacement/considering how the project might move forward without them. Some larger organizations can handle the removal of a core maintainer, most can't.) Rinse and repeat that cycle every five years or so.
F-Droid isn't at all unique in this regard, and most people are willing to ignore it "because it's free, you shouldn't have any expectations". Any long running FOSS project that has significant infrastructure behind it will at some point have this issue and most haven't had a great history at handling it, since the bus factor of a lot of major FOSS projects is still pretty much one point five people. (As in, one actual maintainer and one guy that knows what levers to pull to seize control if the maintainer actually gets hit by a bus, with the warning that they stop being 0.5 of a bus factor and become 0 if they do that while the maintainer is still around.)
[0]: Basically the inverse of https://xkcd.com/1172/
Then again who is to say that I would be a better custodian than this guy?
Given F-Droid's emphasis on isolating and protecting their build environment, I'm kind of surprised that they're just using upstream binaries and not building from source.
https://forum.f-droid.org/t/call-for-help-making-free-softwa...
> For all those following this, we have the budget to buy new hardware, what we lack is a skilled hoster who has committed to physically hosting a new bare metal box for us.
When I read this, pop culture has trained me to expect an insult, like: “Their servers are so old, they sat next to Ben Franklin in kindergarten.”
I don't know why they have enabled modern CPU flags for a simple intermediary tool that compiles the apk resources files, it was so unneccesary
Welp there goes my plans on savaging an old laptop to build my android apps.
https://android.googlesource.com/platform/frameworks/base/+/...
If I had the time, I'd try to compile a binary of it that will run on Win95 just to give my fuckings to the planned obsolescence crowd.
No one would write Android apps on a Chromebook, and making it harder to do so would only reduce the incentive for companies to develop Android apps.
How could Google benefit from pushing a newer instruction set standard on Windows and macOS?
If they required a Google-specific Linux distro to build this thing or if they went the Apple route and added closed-source components to the build system, this could be seen as a move to mess with the competition, but this is simply a developer assuming that most people compiling apps have a CPU that was produced less than 15 years ago (and that the rest can just recompile the toolchain themselves if they like running old hardware).
With Red Hat and Oracle moving to SSE4.1 by default, the F-Droid people will run into more and more issues if they don't upgrade their old hardware.
This happened because nobody gives a shit about F-Droid, not because it's somehow a "threat" with unmaintained apps.
It seems you're suggesting a very specific, targeted attack.
Yes, just like it happened with Firefox: https://news.ycombinator.com/item?id=38926156
Several years ago I glumly opined internally that Firefox had two grim choices: abandon Gecko for Chromium, or give up any hope of being a meaningful player in the market. I am well aware that many folks (especially here) would consider the first of those choices worse than the second. It's moot now, because they chose the second, and Firefox has indeed ceased to be meaningful in the market. They may cease to exist entirely in the next five years.
I am genuinely unhappy about this. I was hired at Google specifically to work on Firefox. I was always, and still remain, a fan of Firefox. But all things pass. Chrome too will cease to exist some day.
> suspicions were plausible but incorrect
The suspicions were not about the evil will of the engineers. It's the will of Google itself (or managers, if you want), which plays the main role here. This is exactly what causes the following:
> engineering teams continued to be resource-constrained
It reminds me a bit of Boeing: https://news.ycombinator.com/item?id=19914838
Despite its size, Google does shoestring engineering of most things, which is why so much is deprecated over time -- there's never budget for maintenance.
So I mean in some sense yes, there's valid criticism of Google's "will" here, but that will was largely unaware of Firefox, and the consequences burned Google products and customers just as much or more in the long run. Nightingale looked past individual instances to see a pattern, but didn't continue to scale the pattern up to first-party products as well.
The idea that not supporting a 20+ year old system is "planned obsolescence" is a bit shallow
And so won't anyone else who has time to complain about planned obsolescence, and that includes myself.
If you want "very different" then look at the record-based filesystems used in mainframes.
Aapt2 is an x86_64 standalone binary used to build android APKs for various CPU targets
Previous versions of it used a simpler instruction set, but the new version requires an extra SIMD instruction SSE4. A lot of CPUs after 2008 support this, but not F-droid's current server farm?
So a bit of both of older hardware, and not-matched-with-consumer-featureset hardware. I'd imagine some server hardware vendors supported SSE4 way earlier than most, and some probably supported it way later than most too.
Using AMD hardware that's "only" 13 years old can also cause this problem, though.
https://github.com/cygnusx-1-org/Discoverium/
Yup. That's a huge, huge issue - IME especially once Java enters the scene. Developers have all sorts of weird stuff in their global ~/.m2/settings.xml that they set up a decade ago and probably don't even think about... real fun when they hand over the project to someone else.
Edit: or perhaps you mean that isn’t the only way to provide such guarantees, which is the implication I got reading your other replies.
I upped my (small) monthly contribution. Hope more people contribute, and also work to build public support.
Also, for developers .. please include old fashioned credit cards as a payment method. I'd like to contribute but don't want to sign up for yet another payment method.
Hardly any different from what was in the genesis of .NET.
Nowadays they support up to Java 17 LTS, a subset only as usual, mostly because Android was being left behind accessing the Java ecosystem on Maven central.
And even though now ART is updatable via PlayStore, all the way down to Android 12, they see no need to move beyond Java 17 subset, until most likely they start again missing on key libraries that decided to adopt newer features.
Also stuff like Panama, Loom, Vector, Valhala (if ever), don't count them ever being supported on ART.
At least, they managed to push into mainstream the closest idea of OSes like Oberon, Inferno, Java OS and co, where regardless of what think about the superiotity of UNIX clones, here they have to contend themselves with a managed userspace, something that Microsoft failed at with Longhorn, Singularity and Midori due to their internal politics.
This is pretty sad IMHO, as Java17 was a true turning point. Java21 is icing and Java25 is an incredible refinement with some fascinating new features that are really well thought out.
Are Google buying Jetbrains?
Also Kotlin Foundation is mostly JetBrains and Google employees.
Modern Android has virtual machines on devices with supported hardware+bootloader+kernels: https://source.android.com/docs/core/virtualization
And yes, you could get that cost down easily.
(A server that old might not have any SSDs, which would be insane for a software build server unless it was doing everything in RAM.)
I still maintain old servers, and even my Amiga server has an SSD.
We don't know for sure the servers don't have SSDs, but we do know that back in the days of server hardware that didn't support SSE4.1, SSDs had not yet displaced hard drives for mainstream storage, so it's likely that servers that old didn't originally ship with SSDs. It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.
A server at that age is also going to be harder to repair when something dies, and it's due for something to die. If they lose a PSU it might be cheaper to replace the whole system with something a bit less old. Other components they'd have to rely on replacing with something used, from a different manufacturer than the original, or use a newer generation component and hope it's backwards compatible. Hence why I said using hardware that old would imply their infrastructure is fragile.
But all of this is still just speculation because nobody involved with F-Droid has actually explained what specific hardware they're using, or why. So I'm still not convinced that the possibility of a misconfigured hypervisor has been ruled out.
You lost me there. One thing has nothing to do with the other.
People have reasons for running the hardware they run. Do you know their reasons? If you do, please share. If not, there's no connection whatsoever between old hardware and unmaintained infrastructure.
Is my AlphaServer DS25 unmaintained? It's very old server hardware.
Is my 1981 Chevette unmantained? It's REALLY old. Can you infer that the fact that I have a car from 1981 means it's unmaintained? I'd say that reasonable people can infer that it's definitely maintained, since it would most likely not still be running if it weren't.
> It's not impossible to retrofit such a server with SSDs, but doing that without upgrading to a more recent platform would be a weird choice.
I don't know where you learned about servers, but no, it's not a weird choice to use newer storage in older servers. Not at all. Not even a little bit. Maybe you've worked somewhere that bought Dell servers with storage and trashed the servers when storage needing upgrading, but that's definitely not normal.
See, this is just you being unreasonable.
Yes, we can all imagine why people might keep old hardware around. But your AlphaServer is at best your hobby, not production infrastructure that lots of people and other projects rely on. Nobody's noticing whether or not it crashes. Likewise for your Chevette: nobody cares until it stalls out in traffic, then everyone around you will make the reasonable assumption that it's behind on maintenance.
If F-Droid is indeed using ancient hardware, and repeatedly experiencing software failures as a result, then the most likely explanation is that their infrastructure is inadequately maintained. Sure, it's not a guarantee, it's not the only possibility, but it's a reasonable assumption to work with until such time as someone from F-Droid explains what the hell is going on over there. And if there's nobody available to explain what their infrastructure is and why it is showing symptoms of being old and unmaintained, that's more evidence for this hypothesis.
[0] https://www.ebay.com/str/evolutionecycling
the point of my post still stands
Intel ME is not a feature for user, it is intended to control any modern CPU except the ones coming to US Army/Navy. It is needed to make Stuxnet-class attacks. The latest chip with possibiliy to have the ME provenly disabled is the 3rd gen.
Many ARM vendors sell powerful arm computers without any ME-analog on board.
> It is needed to make Stuxnet-class attacks.
I have issues with the presence of the ME and I think we agree on a lot of things, but this statement is lunacy lol
Although they do have build servers for the purpose of confirming upstream APKs match the source code using reproducible builds, but those are separate processes that don't block each other (unlike F-Droid's rather monolithic structure).
IzzyOnDroid has been faster with updates than F-Droid for years, releasing app updates within 24 hours for most cases.
> For all those following this, we have the budget to buy new hardware, what we lack is a skilled hoster who has committed to physically hosting a new bare metal box for us.
Genuinely hate that, as Mozilla has deprived me from Firefox's translation feature because of that.
I'm sure there's someone out there who believe their 8086 is still "good enough", so should we restrict all software to the features supported by an 8086: 16-bit computations only, 1 MB of memory, no multithreading, no SIMD, no floats, no isolation between OS and user processes? That would obviously be ludicrous.
At a certain point it just doesn't make any sense to support hardware that old anymore. When it is cheaper to upgrade than to keep running the old stuff, and only a handful of people are sticking with the ancient hardware for nostalgic reasons, should that tiny group really be holding back basically your entire user base?
"A tiny group is holding back everyone" is another silly strawman argument - all decent packaging/installation systems support providing different binaries for different architectures. It's just a matter of compiling just another binary and putting it into a package. Nobody is being hold back by anyone, you just can't make a more silly argument than that...
Your suggestion falls flat on its face when you look at software where performance REALLY matters: ffmpeg. Guess what? It'll use SIMD, but can compile and run just fine without.
I don't understand people who make things up when it comes to telling others why something shouldn't be done. What's it to you?
https://wiki.debian.org/InstructionSelection
There's some guidelines[1][2] for developers to follow for a reasonable set of features, where they only need to manage ~4 variants. In this proposal the lowest set of features include SSE4.1, which is basically includes nearly any x86_64 CPU from the past 15 years. In theory we could use a modern CPU to compile the 4 variants and ship them all in a FatELF, so we only need to distribute one set of binaries. This of course would be completely impractical if we had to support every possible CPU's distinct features, and the binaries would be huge.
[1]:https://lists.llvm.org/pipermail/llvm-dev/2020-July/143289.h...
[2]:https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...
This is horribly inaccurate. You can compile software for 20 year old CPUs and run that software on a modern CPU. You can run that software inside of qemu.
FYI, there are plenty of methods of selecting code at run time, too.
If we take what you're saying at face value, then we should give up on portable software, because nobody can possibly test code on all those non-x86 and/or non-modern processors. A bit ridiculous, don't you think?
That's testing it on the new CPU, not the old one.
> You can run that software inside of qemu.
Sure you can. Go ahead. Why should the maintainer be expected to do that?
> A bit ridiculous, don't you think?
Not at all. It's ridiculous to expect a software developer to give any significance to compatibility with obsolete platforms. I'm not saying we shouldn't try. x86 has good backward compatibility. If it still works, that's good.
But if I implement an algorithm in AVX2, should I also be expected to implement a slower version of the same algorithm using SSE3 so that a 20 year old machine can run my software?
You can always run an old version of the software, and you can always do the work yourself to backport it. It's not my job as a software developer to be concerned about ancient hardware unless someone pays me specifically for that.
Would you expect Microsoft to ship Windows 12 with baseline compatibility? I don't know if it is, but I'm pretty certain that if you tried running it on a 2005 CPU, it would be pretty much non-functional, as performance would be dire. I doubt it is anyway due to UEFI requirements which wouldn't be present on a machine running such CPU.
There's the issue. You think that Windows is normal and an example of stuff that's relevant to open source software.
If people write AVX-512 and don't want to target anything else, then fine. But then it's simply not portable software.
Software that's supposed to be portable should be, you know, portable.
The implication is that you can decide not to support 20-year-old CPUs and still have portable software. People who think that are just ignorant, because if software is portable, it'll work on 20-year-old CPUs. The "20 year old CPUs" part is a red herring: it has nothing to do with anything, aside from the fact that portable software runs on 20-year-old CPUs just as it runs on different CPUs.
As an aside, instead of making up excuses for bad programmers, you might be interested to learn that software compiled with optimizations for newer amd64 didn't show any significant improvement over software compiled for all amd64.
Also, you have things backwards: code written and compiled today and run on 2005 CPUs wouldn't be "pretty much non-functional, as performance would be dire" unless you're talking about Windows. This is a problem with programmers and with Windows, and targeting the newest "features" of amd64 doesn't fix that. Those things aren't even related.
It's interesting how many people either don't understand programming or intentionally pretend not to, just to make excuses for software like Windows.
It's not that hard to use Gentoo.
It's fine to ship binaries with hard-coded cpu flag requirements if you control the universe, but otherwise not, especially if you are in an ecosystem where you make it hard for users to rebuild everything from source.
/s (should be obvious but probably not for this audience)
https://wiki.debian.org/InstructionSelection
Guess what the company behind Android wants to do...
This means their build infrastructure burns excessive amounts of power, run by volunteers in basements/homelabs on vintage, museum-grade hardware (15-year-old Opterons/Phenoms).
Gamers went through this 14 years ago, when 'No Man's Sky' became the first big game to require SSE 4.1 for no particular reason.
For example, if they published their exact setup for building Android apps so others could replicate it
How many Android users compile the apps they use themselves?
Perhaps increasing that number would be a goal worth pursuing
Seems like he is talking about the developer being responsible for that also!
Also "not necessarily your fault" means "probably not your fault", the opposite of "your fault"
https://wiki.debian.org/InstructionSelection
That's a problem that people are trying to solve by not using an ancient CPU baseline. Do you have a reasonable proposal for how else we should enable widespread use of hardware functionality that's less than two decades old?
On Hetzner (not affiliated), at this moment, an i7-8700 (AVX2 supported) with 128 GB RAM, 2x1 TB SSD and a 1 Gbit uplink costs 42.48 EUR per month, VAT included, in their server auction section.
What are we missing here, besides the fact that the build farm was left to decay?
In the former case, I fail to see how ME or its absence is relevant to building Android apps, which they do using Google-provided binaries that have even more opportunity to inject naughty bits into the software. In the latter case, I better forget they exist.
> they are just "it used to work perfectly" guys.
To do a supply-chain attack on Google's SDK would be much more expensive and less likely to succeed. Google isn't going to be the attacker.
The recent attack on AMI/Gigabyte's ME shows how a zero-day can bootkit a UEFI server quite easily.
There are newer Coreboot boards than the Opteron ones, though. Some embedded-oriented BIOSes let you fuse out the ME; you're warned that this is permanent and irreversible.
F-Droid likely has upgrade options even in the all-open scenario.
Upgrading the build farm CPUs seems like the obvious fix, but I’m guessing funding and coordination make it less straightforward. In the meantime, forcing devs to downgrade AGP or strip baseline profiles just to ship feels like a pretty big friction point.
Long term, I wonder if F-Droid could offer an optional “modern build lane” with newer hardware, even if it means fewer guarantees of full reproducibility at first. That might at least keep apps from stalling out entirely.
Very intelligent move from Google. Now you can't compile "Hello World" without SSE4.1, SSSE3. /s
Are there any x86 tablets with Android?
You need a middleman in place in case the app developer goes bad.
https://apps.obtainium.imranr.dev/
They put the disclaimer on top that this list is not meant as an app store or catalog. It's meant for apps with somewhat complex requirements for adding to Obtainium. But it serves well as a catalog since most of the major open source apps are listed.
Our Dutch news (and I think most EU news) is pretty much presenting us with the view that Israel has lost it (stories about young men searching for food being shot in the genitals for fun and such [0]), so I'm very curious how their government presents things to its civilians.
[0] https://nos.nl/nieuwsuur/artikel/2575933-beschietingen-bij-z...
There are also fully English sources like the Times of Israel, though it has a somewhat international audience, not only Israelis.
For example, whether or not you're aware that the banking system is collecting interest on all the money in the world, every second of the day, and it created it all out of thin air.
Believe what? A fact that is being actively documented in Gaza by NGOs and corroborated by numerous international news agencies?
This is all coming across as dishonest (especially when looking at your own homepage).
But in any case, this is a false dichotomy, and likely an exaggerated one to begin with.
The tools in question in the OP should be easy to build from source and shouldn't rely on the host's architecture, so that they're usable on platforms like ARM and RISC-V. It's clear that in the Android ecosystem people don't care, so F-Droid can't do miracles (the Java/Gradle ecosystem is just really bad at this), but this wouldn't happen if the build tools had proper build recipes themselves.
Yup, same here! The story is as old as time, and the examples are plentiful. First Slashdot, then Reddit, and now GitHub, all became far, far slower and less usable once they'd been "improved" by folks engaging in resume-driven development:
Why is GitHub UI getting slower? - https://news.ycombinator.com/item?id=44799861 - Aug 2025 (115 comments)
I am, too, as a user, quite pleased that F-Droid is keeping it cool and reliable for the actual users.
What an entitled conclusion.