Hands down one of the greatest services out there, stopped a racket and made the internet secure.
I remember a time when having an HTTPS connection was for "serious" projects only, because the certificate cost much more than the domain. You'd go commando, and if the project stuck, you'd purchase a certificate for 100 bucks or something.
dachris 3 days ago [-]
There's still enough people out there who don't know better, manually purchasing (or auto-renewing) a new certificate every year from their hosting provider like it's 2013.
karel-3d 3 days ago [-]
I have dealt with a banking environment that required SSL certificates with at least 1-year validity on the callback API URL. Which excluded Let's Encrypt.
We were looking for an SSL provider that offered >1-year certs AND supported ACME... for some reason we ended up with SSL.com, which did support ACME for longer-lasting certs. However, there were some minor incompatibilities between how Kubernetes cert-manager implemented ACME and how SSL.com implemented ACME, so we ended up debugging SSL.com's ACME protocol implementation.
Fun. We should have just clicked once every 3 years; better than debugging third-party APIs.
No, I don't remember the details and they are all lost in my old work emails.
(Nowadays I think zerossl.com also supports ACME for >1-year certs? But they did not back then. Edit: no, they still don't; it's just SSL.com, I think.)
lol768 3 days ago [-]
> I have dealt with a banking environment that required SSL certificates with at least 1-year validity on the callback API URL
Why are (some) banks always completely clueless about these things? Validating ownership of the domain more often (and with an entirely automated provisioning set-up that has no human weak links) can only be a good thing.
Perhaps the banking sector will finally enter the 21st century in another ten years?
karel-3d 3 days ago [-]
The banking sector usually goes with "checkbox security".
They have these really, really long lists of what needs to be secured and how. Some of it is reasonable, some of it is bonkers, there is way too much of it, and overall it increases the price of any solution at least 10x.
But OTOH I can hardly blame them; failures can be catastrophic there, as they deal with real money directly and can be held liable. So they don't care about security so much as about covering their asses.
dspillett 3 days ago [-]
> some of it is bonkers
Some of it is truly bonkers and never was good practise, but much of the irritating stuff is simply out-of-date advice. The banks tend to be very slow to change unless something happens that affects (or directly threatens to affect) the bottom line, or puts them in the news unfavourably.
Of course some of it is bonkers, like HSBC and FirstDirect changing the auth for my personal accounts from “up to 9 case-sensitive alpha-numeric characters” (already considered bad practise for some years) to “6 digits”, and assuring me that this is just as secure as before…
dizhn 3 days ago [-]
That sounds like "we were truncating your id and pass before anyway".
dspillett 3 days ago [-]
I don't think so, because it would also imply they were throwing away anything non-numeric, and I really hope nothing that stupid was going on. When the change happened, everyone had to establish a new password.
I read it as "we have been asked to integrate an ancient system that we can't update (or, more honestly in many cases: can't get the higher-ups to agree to pay to update), so we are bringing our other systems down to the lowest common denominator". That sort of thing happens too often when two organisations (or departments within one) that have different procedures merge or otherwise start sharing resources they didn't previously.
BrandoElFollito 3 days ago [-]
I won't go into the idiocies banks implement. They usually have to because totally incompetent people tell them they have to.
One of the practices was pathetic to the point of being funny: you had to input specific characters of your password (2nd, 4th, 6th, etc - this was changing at each login) AND there was a short timeout. My children probably learned a few new words when I was logging in.
jjeaff 2 days ago [-]
I suppose the purpose of that was to signal to hackers that they ARE in fact storing all passwords in plaintext?
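It does imply that. A server that can answer "give me the 2nd, 4th and 6th characters" must be able to read individual characters of the stored password, which a one-way hash rules out; the password has to be kept in plaintext or reversibly encrypted. A minimal sketch of what such a check requires (function name and password are hypothetical):

```python
def check_positions(stored_password: str, positions: list[int], answers: list[str]) -> bool:
    """Verify the user's answers against 1-indexed character positions.

    Note that this needs the stored password itself, not a hash of it:
    there is no way to extract "the 4th character" from a digest.
    """
    return all(stored_password[p - 1] == a for p, a in zip(positions, answers))

# The bank asks for characters 2, 4 and 6 of the password "hunter42":
print(check_positions("hunter42", [2, 4, 6], ["u", "t", "r"]))  # True
print(check_positions("hunter42", [2, 4, 6], ["u", "t", "x"]))  # False
```

A scheme like this can only be squared with hashing by storing a separate hash per character position, which leaks so much per-character information that it is barely better than plaintext.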
BrandoElFollito 2 days ago [-]
It is very likely, yes. After a year or so of this practice (people were writing down their passwords with the digits under the letters to quickly match the request), the bank said that they now offered two "secure login forms": the older one and a new, normal one.
Some time later they silently removed the first one.
Apfel 2 days ago [-]
TSB bank in the UK still does exactly this.
Veen 3 days ago [-]
The problem is more likely one of regulation than technical knowledge. Banks hire very smart people who know that a lot of what they do is bullshit, but they're paid to comply with banking and security regulations that lag a long way behind technical advances. Banks are also inherently conservative in their technical choices, and for good reason.
JoshTriplett 3 days ago [-]
> I have dealt with a banking environment that required SSL certificates with at least 1-year validity on the callback API URL. Which excluded Let's Encrypt.
I wonder if this would be an opportunity for revenue for Let's Encrypt? "We do 90-day automated-renewal certificates for free for everyone. If you're in an unusual environment where you need certificates with longer validity, we offer paid services you can use."
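For context on why the 90-day cadence is workable at all: ACME clients renew automatically well before expiry (certbot's default is to renew when fewer than 30 days of the 90-day lifetime remain). A sketch of that decision, with hypothetical dates:

```python
from datetime import datetime, timedelta

def should_renew(not_after: datetime, now: datetime, window_days: int = 30) -> bool:
    """Renew once we are inside the renewal window before expiry.

    certbot's default behaviour is to renew when fewer than 30 days
    of a 90-day Let's Encrypt certificate remain.
    """
    return now >= not_after - timedelta(days=window_days)

issued = datetime(2025, 1, 1)
not_after = issued + timedelta(days=90)  # Let's Encrypt lifetime: 90 days

print(should_renew(not_after, datetime(2025, 2, 1)))   # False (59 days left)
print(should_renew(not_after, datetime(2025, 3, 15)))  # True (17 days left)
```

Run from a daily cron or systemd timer, this gives roughly a month of retries before anything actually breaks, which is the whole argument for automation over yearly manual purchases.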
karel-3d 3 days ago [-]
If they want to do something commercial, they should go for the code signing certificates, that stuff is still a racket.
account42 3 days ago [-]
Probably better to keep LE / ISRG completely non-profit. Adding a profit motive has too big of a chance to end with actually security-relevant features being gated behind payment eventually.
JoshTriplett 3 days ago [-]
It's less about the profit motive, and more about removing the remaining incentives to stay outside the ACME ecosystem. The funding would be to provide additional infrastructure (e.g. revocation servers for longer-lasting certificates), and to fund new such efforts.
account42 3 days ago [-]
But once there is an income stream from issuing certificates, there is an incentive to increase it, which will quickly find itself at odds with the primary mission of providing secure connections to as many people as possible. Making infrastructure depend on that income stream only increases that incentive. Perhaps you trust the ISRG to resist the temptation, but as far as I know they are run by humans.
JoshTriplett 3 days ago [-]
There are many, many opportunities in both the business and non-profit world to make more money by screwing your customers/users, and despite that, it does not always happen. Businesses and non-profits are built on the trust of users (or built in spite of the utter lack of it, e.g. Comcast). I don't think they should be afraid to provide things users need. It is, in fact, possible to choose and keep choosing to maintain the trust of your users.
I think there's still incentive alignment here. Getting people moved from the "purchase 1 year certificate" world (which is apparently still required in some financial contexts) into the ACME-based world provides a path for making a regulatory argument that it'd be easy for such entities to switch over to shorter-lived certificates because the ACME infrastructure is right there.
Nekit1234007 3 days ago [-]
I'm pretty sure ISRG doesn't want to deal with payments any more than they do now (i.e. outside of donations and sponsorships)
merb 3 days ago [-]
GlobalSign and DigiCert have ACME support.
karel-3d 2 days ago [-]
Hm, I have no idea why we didn't pick them back then. I see it now. Maybe it was too expensive? I don't remember the reasoning at this point.
mrtksn 3 days ago [-]
AFAIK there are things like Extended Validation certificates that used to make the browser address bar look more trustworthy by making it green, but I don't know if that's still a thing. At least in Safari, I don't see a green padlock anywhere.
mrweasel 3 days ago [-]
I remember our boss really wanted that green bar, so we got an Extended Validation certificate. What we had failed to realise is that they would only be issued to the actual legal name of your company, not any other names you may be operating under. We had a B2C webshop where we wanted the EV cert, but because the B2C side of the business wasn't its own legal entity, the cert we got issued was for our B2B name, which none of our customers knew, and it looked like a scam.
The only good thing about dealing with certificate resellers at the time was that they were really flexible in a lot of ways. We got our EV cert refunded (or "store credit") and used the money to buy normal certificates.
bux93 3 days ago [-]
Chrome 77 removed the prominent green EV badge. "A series of academic research in the 2000s studied the EV UI in lab and survey settings, and found that the EV UI was not protecting against phishing attacks as intended. The Chrome Security UX team recently published a study that updated these findings with a large-scale field experiment, as well as a series of survey experiments." [1]
Extended Validation can still play a role in a corporation's IT control framework; the extended validation is essentially a check-of-paperwork that then doesn't need to be performed by your own auditor. Some EV certificates also come with some (probably completely useless) liability insurance.
> Some EV certificates also come with some (probably completely useless) liability insurance.
Warranties / insurance on SSL certificates typically only pay out if a certificate is issued improperly, often in conjunction with other conditions like a financial loss directly resulting from the misissuance. Realistically, any screwup serious enough to result in that warranty paying out would also result in the CA being abruptly removed from browser root certificate programs.
uid65534 3 days ago [-]
Ah yes, I too remember when COMODO was ripped out of browsers in 2011 when it came to light they gave sign-anything rights to a bunch of resellers, one of whom was hacked. And then again in 2016.
And another fun one unrelated to signing was when they tried to trademark "Let's Encrypt" in 2015.
But yes, it is not a common issue and effort would be better focused on improving site security in other ways. (unlike the rest of my comment, this line isn't sarcasm.)
Propelloni 3 days ago [-]
They are still there, but most browsers don't do anything with it anymore since 2019, when Firefox and Chrome stopped caring.
There are some scenarios where you still have to employ EV certificates, e.g. code signing.
Systemmanic 3 days ago [-]
Chrome and Firefox removed the extra UI stuff for EV certs in 2019:
Yeah that also stopped being a thing. I'm really happy how Chrome and then other browsers gradually shifted the blame to insecure websites rather than highlighting "secure" ones.
You'll still find people online claiming EV certificates are worth anything more than $0, but you can ignore them just as well.
_betty_ 3 days ago [-]
they were also pretty bad for performance due to the extra lookup (and reduction in caching)
account42 3 days ago [-]
What extra lookup? AFAIU they are just like normal certificates but with a "customer paid extra" flag.
They normally require a revocation lookup on the spot, and IIRC there were differences in whether or how stapling worked.
account42 3 days ago [-]
Interesting. Sounds like a cost that is entirely reasonable for use cases like online banking though.
tannhaeuser 3 days ago [-]
Huh? EV certificates actually certify that you're the (legal) person you claim to be, based on ID and trade register checks, unlike Let's Encrypt certificates, which only certify that you're in possession of a domain. Isn't using EV certificates legally required for e-commerce web sites at least in parts of the world, and also obligatory for rolling out as a MasterCard/Visa merchant by their anti-fraud requirements, along with vulnerability checks and CI/site update processes being in place?
khuey 3 days ago [-]
> Isn't using EV certificates legally required for e-commerce web sites at least in parts of the world
Not in any jurisdiction I'm aware of, though it's a big world so it wouldn't shock me if some small corner of it has bad laws.
> and also obligatory for rolling out as MasterCard/Visa merchant by their anti-fraud requirements
PCI DSS does not require EV certificates.
irjustin 3 days ago [-]
Related point - we interface with Singapore gov services (MYINFO).
They don't recognize LE or AWS certs, only the big paid ones. Such an annoying process too: to pay, to obtain, and to update the certs.
tialaramex 3 days ago [-]
I guess the good thing there is that it's absolutely transparent that this is just a way to make you pay somebody else. Like the Jones Act (Merchant Marine Act, but everybody just calls it the Jones Act). The US government doesn't get a slice if you want to buy ships to move stuff from one part of the US to another, but it does require that you buy the ships from an American shipyard, and so those yards needn't be internationally competitive because the US government has their back.
Nobody is like "Oh, the Jones Act ensures high quality ships" because it doesn't, the Jones Act just ensures that you're going to use those US shipyards, no matter what.
vmit 3 days ago [-]
Myinfo did away with certificate requirement altogether! Yay!
(hello from Singapore)
ta1243 3 days ago [-]
Our company bans the use of letsencrypt because of the legal terms. Nobody at the CxO level will sign off on it, so we end up paying whatever to globalsign.
seszett 3 days ago [-]
What legal terms do they find objectionable?
What about ZeroSSL, which is basically interchangeable with Let's Encrypt?
wink 3 days ago [-]
Doesn't necessarily have anything to do with knowing better; some environments just aren't worth automating, or support it so badly that even paying twice or more is still nothing compared to the annoyance. It's been getting better over the years, though.
yread 3 days ago [-]
Unfortunately, the code signing certificates work pretty much the same way
technion 3 days ago [-]
I deal with multiple enterprise applications where the idea of scripting a renewal involves playing with headless Chrome.
I'm really not a fan of it but I'm happier paying for a one year cert than doing that
yurishimo 3 days ago [-]
Sorry if this is a dumb question, but why? If I'm not mistaken, Let's Encrypt supports validation via DNS now so you don't even need to have a working webserver to issue a certificate. Automating a script to perform a renewal should be much simpler than headless Chrome!
If your DNS provider doesn't have an API, that seems like a separate issue but one that is well worth your organization's time if you're working in the enterprise!
patrakov 3 days ago [-]
I guess it is not about renewal but about certificate deployment.
blipvert 3 days ago [-]
You can set up the _acme-challenge record (or whatever it is) as a CNAME pointing to a domain which does support an API for automating the renewal.
(looking in to setting this up for a bunch of domains at work)
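For anyone looking at the same setup: the record in question is the DNS-01 challenge TXT record from RFC 8555, and both its name and its value are mechanically derivable, which is what makes the CNAME delegation trick work. A sketch (domain names and the CNAME target are hypothetical examples, not real providers):

```python
import hashlib
from base64 import urlsafe_b64encode

def challenge_record(domain: str) -> str:
    # DNS name where the DNS-01 TXT record must appear (RFC 8555, section 8.4)
    return f"_acme-challenge.{domain}"

def txt_value(key_authorization: str) -> str:
    # TXT record contents: unpadded base64url of SHA-256(key authorization)
    digest = hashlib.sha256(key_authorization.encode("ascii")).digest()
    return urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# With delegation, _acme-challenge.example.com is a CNAME pointing at a zone
# you *can* automate (e.g. example-com.acme.dns-host.example), the TXT record
# is published there instead, and the CA's validator follows the CNAME.
print(challenge_record("example.com"))  # _acme-challenge.example.com
```

The nice property is that the CNAME is set up once, by hand if necessary, and only the delegated zone ever needs API access, so the primary DNS provider never has to support automation.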
technion 3 days ago [-]
Obtaining a certificate via DNS doesn't help you install it via a web interface that takes 20+ clicks and a 15-minute reboot to apply.
pastage 3 days ago [-]
And open a ticket on the supplier's website, click through four pages with free-text input, then send the certificate via email.
Let's not talk about key delivery. We will get back the admin cost of all that in a year if we tunnel them through one of our LBs.
christophilus 3 days ago [-]
I had a lazily configured proxy which would request a cert for any domain you threw at it. An attacker figured this out and started peppering it with http requests with randomly generated subdomains prefixed. When I discovered it, my first thought wasn’t, “Oh, I hope I didn’t get flagged by Let’s Encrypt.” It was, “Oh, man. I feel really bad that my laziness caused undue load on Let’s Encrypt.”
Let’s Encrypt is the best thing to happen to the web in at least a decade.
BiteCode_dev 3 days ago [-]
Mozilla is getting a lot of criticism, but for Let's Encrypt alone they are making the world a better place.
Before them I never used SSL for anything, because the cost/benefit ratio was just not there for my services.
Since then, I never not use it.
qskousen 3 days ago [-]
Two Mozilla employees were involved in starting Let's Encrypt, and the Mozilla foundation is one of the sponsors, but as far as I can tell the foundation was not directly involved in creating it.
gg82 3 days ago [-]
One of the more productive uses of all those millions of dollars that Mozilla received. These days, Let's Encrypt is more important than Mozilla... and would have no difficulty receiving donations to keep the service running. It also shows what a well run technical non-profit looks like!
dewey 3 days ago [-]
At least before there were some services where you could get a single-domain "real" certificate for free (through a complicated and annoying process), but if you wanted a wildcard certificate to cover a bunch of subdomains for personal projects, it became really expensive.
Glad this problem just got completely resolved.
dtquad 3 days ago [-]
Google/Chrome and Firefox also deserve credit for making a free and open CA viable.
jaas 3 days ago [-]
We consider our ten year anniversary to be in 2025 but I appreciate the kind words here!
Today is roughly the ten year anniversary of when we publicly announced our intention to launch Let's Encrypt, but next year is the ten year anniversary of when Let's Encrypt actually issued its first certificate:
It's actually serendipitous that it happened right in December 2015. That's when I had only enough money for a domain, but not for an SSL certificate, and my site needed one. Thanks to Let's Encrypt's free SSL, the project took off.
pests 3 days ago [-]
It feels like just yesterday I was paying for certs or, worse, just running without.
Can't believe it's been ten years.
ozim 3 days ago [-]
Can’t believe there are still anti TLS weirdos.
Pannoniae 3 days ago [-]
TLS is not a panacea and it's not universally positive. Here are some arguments against it, for balance.
TLS is fairly computationally intensive. Sure, that's not a big deal now because everyone is using superfast devices, but try browsing the internet with a Pentium 4 or something. You won't be able to, because there is no AES instruction set support to accelerate the encryption, so it's hilariously slow.
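(Worth noting that this is exactly why ChaCha20-Poly1305 was added to TLS: it stays fast in software on CPUs without AES hardware acceleration. You can inspect which suites your own TLS stack offers with Python's stdlib `ssl` module; a small sketch, whose output depends on your local OpenSSL build:)

```python
import ssl

# Build a client-side TLS context and list the cipher suites it would offer.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
names = [c["name"] for c in ctx.get_ciphers()]

aes = [n for n in names if "AES" in n]
chacha = [n for n in names if "CHACHA20" in n]

# ChaCha20 suites are the software-friendly option for pre-AES-NI hardware.
print(f"{len(aes)} AES suites, {len(chacha)} ChaCha20 suites offered")
```

On most modern builds both families show up, and servers can prefer ChaCha20 for clients that signal no AES acceleration.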
It also encourages memoryholing old websites which aren't maintained - priceless knowledge is often lost because websites go down because no one is maintaining them. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore.... if my drives fail, that knowledge will be lost forever.
It is also a very centralised model - if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
It also discourages naive experimentation - sure, if you know how, you can MitM your own connection but for the not very technical but curious user, that's probably an insurmountable roadblock.
ozim 3 days ago [-]
*It also discourages naive experimentation*: that's the point. If you put up a silly website, no one can easily MitM it when its data is sent across the globe and use a browser 0-day on your "fluffy kittens page".
The biggest problem Edward Snowden uncovered was that this stuff was happening, and happening en masse, FULLY AUTOMATED. It wasn't some kid in a basement getting a MitM on your WiFi after hours of tinkering.
It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
There is no "balance" if you understand that bad people are going to swap your "fluffy kittens page" for "hardcore porn" if they ever get their hands on it. Bad people will include 0-day malware to target anyone and everyone, just in case they can earn money on it.
You also have to understand that you don't have any control over which networks your "fluffy kitten page" data passes through; malicious groups have pulled off BGP hijacking multiple times.
So saying "well, it's just a fluffy kitten page my neighbors check for the photos I post" suggests there is a lot of explaining to be done about how the Internet works.
account42 3 days ago [-]
> It also discourages naive experimentation: that's the point. If you put up a silly website, no one can easily MitM it when its data is sent across the globe and use a browser 0-day on your "fluffy kittens page".
Transport security doesn't make 0-days any less of a concern.
> It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
That's a societal/legal problem. Trying to solve those with technological means is generally not a good idea.
> There is no "balance" if you understand that bad people are going to swap your "fluffy kittens page" for "hardcore porn" if they ever get their hands on it. Bad people will include 0-day malware to target anyone and everyone, just in case they can earn money on it.
The only people who can realistically MITM your connection are network operators and governments. These can and should be held accountable for their interference. You have no more security that your food wasn't tampered with during transport, but somehow you live with that. Similarly, the security of physical mail is a 100% legislative construct.
> You also have to understand that you don't have any control over which networks your "fluffy kitten page" data passes through; malicious groups have pulled off BGP hijacking multiple times.
I don't, but my ISP does. Solutions for malicious actors interfering with routing are needed irrespective of transport security.
> So saying "well, it's just a fluffy kitten page my neighbors check for the photos I post" suggests there is a lot of explaining to be done about how the Internet works.
Not at all, unless you are also expecting them to have their fluffy kitten postcards checked for anthrax. In general, it is security people who often need to touch grass, because the security model they are working with is entirely divorced from reality.
ozim 3 days ago [-]
All I got from your explanation is:
I am going to cross the street in front of that speeding car because driver will be held liable when I get hit and die.
If there isn't even a possibility to hijack the traffic, a whole range of things just won't happen. And holding someone liable is not the solution.
wizzwizz4 3 days ago [-]
Technological measures don't make things impossible: they make them harder. And they rarely solve all the consequences of a problem: only the ones that have been explicitly identified.
account42 3 days ago [-]
The situation is more akin to demanding that pedestrians should be prevented from crossing the road at all cost because a malicious driver could ignore all red lights. And of course banning pedestrians isn't enough. After all, motorcycles are also pretty unsafe, so we ban those too. But you see, someone could also be pointing a bazooka at the road, so then we require all cars to have sufficient armor plating in order to be allowed on the road. That is, before realizing that portable nukes exist and you never know who has one. We don't do that. Instead we develop specific solutions (e.g. an over/underpass for high-risk intersections, walls for highways) where they are actually needed, without losing sight of the unreasonable cost (not just monetary) that demanding zero risk would impose.
lcnPylGDnU4H9OF 3 days ago [-]
> The situation is more akin to demanding that pedestrians should be prevented from crossing the road at all cost because a malicious driver could ignore all red lights.
Only if you are talking about actual events in which this is happening as a matter of course. Because that's what it is when ISPs inject ads into plain-text HTTP traffic: a matter of course. It's a bit more like saying that we don't have a way to effectively enforce our laws against maliciously reckless driving so we install a series of speed bumps on the road (it's still not quite the same thing because it doesn't make the reckless driving impossible but it does increase the cost).
But it's not like we're talking about agreeable activity here, anyway. This particular case against TLS sounds like a case that favors criticizing an imperfect solution to widespread negative behavior over criticizing the negative behavior. It seems reasonable to look at the speed bumps (which one may or may not find distasteful) and curse the reckless behavior of those who incentivized their construction.
ozim 3 days ago [-]
For me TLS is an overpass - yeah it costs more to build it, pedestrians have to climb the stairs to get on the other side but it is worth it. Then hopefully we have Let's Encrypt that can be an elevator/lift so pedestrians don't have to climb the stairs.
But that analogy of course runs dry rather quickly, because you can look both ways when crossing a street. On the internet, as I mentioned, you cannot control where your data flows, and bad actors have already proven that they exploit this.
This is why it is not like an overpass that you can build where the need is: for internet traffic, the need is everywhere.
hehehheh 3 days ago [-]
Counterpoints:
> Transport security doesn't make 0-days any less of a concern.
It does. Each layer of security doesn't eliminate the problem but does make the attack harder.
Mail and food are different in that there are no limitless, scalable attacks that can originate from anywhere around the globe.
OkayPhysicist 3 days ago [-]
> transport security doesn't make 0-days any less of a concern.
It does make the actual execution of said attacks significantly harder. To actually hit someone's browser, they need to receive your payload. In the naive case, you can stick it on a webserver you control, but how many people are going to randomly visit your website? Most people visit only a handful of domains on a regular basis, and you've got at most a couple of days before your exploit gets patched.
So you need to get your payload into the responses from those few domains people are actually making requests to. If you can pwn one of them, fantastic: serve up your 0-day. But those websites are big and constantly under attack, which means you're not going to find any low-hanging fruit vulnerability-wise. Your best bet is trying to get one of them to willingly serve your payload, maybe in the guise of an ad or something. Tricky, but not impossible.
But before universal https, you have another option: target the delivery chain. If they connect to a network you control? Pwned. If they use a router with bad security defaults that you find a vulnerability in? Pwned. If they use a small municipal ISP that turns out to have skimped on security? Pwned. Hell, you open up a whole attack vector via controlling an intermediate router at the ISP level. That's not to mention targeting DNS servers.
HTTPS dramatically shrinks the attack surface for the mass distribution of unwanted payloads down to basically the high-traffic domains and the CA chain. That's a massive reduction.
> The only people who can realistically MITM your connection are network operators and governments.
Literally anyone can be a network operator. It takes minimal hardware. Coffee shop with wifi? Network operator. Dude popping up a wifi hotspot off his phone? Network operator. Sketchy dude in a black hoodie with a raspberry pi bridging the "Starbucks_guest" as "Starbucks Complimentary Wifi"? Network operator. Putting the security of every packet of web traffic onto "network operators" means drastically reducing internet access.
> You have no more security that your food wasn't tampered with during transport but somehow you live with that.
I've yet to hear of a case where some dude in a basement poisoned a CISCO truck without having to even put on pants. Routers get hacked plenty.
HTTPS is an easy, trivial-cost solution that completely eliminates multiple types of threats, several of which either do major damage to their targets or risk mass exposure, or both. Universal HTTPS is like your car beeping at you when you start moving without your seat belt on: kinda annoying when you're doing a small thing in a tightly controlled environment, but it has an outstanding risk reduction and can be ignored with a little headache if you really want to.
jjeaff 2 days ago [-]
I especially agree with your point about Cisco trucks (although I think you meant Sysco, an important distinction since we are comparing food supply to networks). The fact is, there are plenty of ways to poison the food supply in our current society. Even ways that might minimize your ability to be discovered. And yet it is rarely tried. But networks are infiltrated all the time. I think partially because networks are accessible from anywhere in the world. No pants (as you said) or passport required.
dspillett 3 days ago [-]
> It is also a very centralised model
I can see why the centralisation is suboptimal (or even actively bad if I'm feeling paranoid!), but other schemes (web of trust, etc.) tend to end up far more complicated for the end user (or their UA). So far no one has come up with a practical alternative without some other disadvantage that would block its general adoption.
> if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
Because if we don't trust those few 3rd parties, we end up having to effectively trust every host on the Internet, which means trusting people and trusting all the people is a bad idea.
Some argue that needing a trusted certificate for just a personal page is extreme, but this is one of those cases where the greater good has to win out. For instance: if we train people that self-signed certs are fine to trust in some circumstances, they'll end up clicking OK to trust them in circumstances where they really shouldn't. This can seem a bit nanny-ish, but people are often dumb, or just lazy to the point where it is sometimes indistinguishable from dumb (I'm counting myself here!), so they need a bit of nannying. And anyway, if your site doesn't take any input then no browser will (yet) complain about plain HTTP.
> It also discourages naive experimentation
When something could affect security, discouraging naive experimentation on the public network is a good thing IMO. Do those experiments more locally, or at least on hosts you don't expect the public to access.
chaxor 3 days ago [-]
I agree that centralization is bad, and it's one of the worst parts of HTTPS (the other being that things like ed25519 systems, ChaCha20, Poly1305, and sntrup are generally viewed as better modern alternatives to AES, so post-quantum systems like Rosenpass https://github.com/rosenpass/rosenpass are preferable).
However, I think there is no reason at all that a decentralized system can't be far _far_ simpler for a user to instantiate (not to mention far more secure and private). Crypto gets a lot of hate on HN, but that seems mostly due to people's dislike of anything dealing with 'currency' or financial systems that touch it. This is a despised opinion here, but I am still actually excited for crypto systems that solve real-world problems like TLS certs, DNS, et al.
Iroh seems like a _fantastic_, phenomenal system to showcase this idea. It allows for a very fast decentralized web experience on modern building blocks such as Blake3 and QUIC, but doesn't really touch any financial stuff at all. It's simply a good system.
I hope we can slowly move to a system that uses the decentralized consensus algorithms created in the crypto space to remove the trust in (typically big, corporate, and likely backdoored) centralized entities that our system today _requires_ without any alternative.
bmicraft 2 days ago [-]
> It also encourages memoryholing old websites which aren't maintained - priceless knowledge is often lost because websites go down because no one is maintaining them. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore.... if my drives fail, that knowledge will be lost forever.
If the website really isn't maintained, then it's only a matter of time until the server is part of a botnet. Setting up LE for a simple site takes half an hour once.
account42 3 days ago [-]
I find the lack of backwards compatibility also concerning - and that is not something that can be fixed, as the deprecation of old SSL/TLS versions and ciphers is intentional.
Beyond that, TLS also adds additional points of failure. For one, it prevents users from accessing websites that are still operational but have an outdated cert or some other configuration issue. And HSTS even requires browsers to deprive users of the agency to override default policies and access the site anyway.
TLS is also a complex protocol with complex implementations that can bring their own security issues, e.g. Heartbleed.
There are also many cases where there are holes in the security. E.g. old HTTP links, even if they redirect to HTTPS, provide an opportunity for interception. Similarly, entering domain names without a scheme requires browsers to either allow downgrade to HTTP or break older sites. The solutions to this (mainly HSTS and HSTS preload) don't scale and bring many new issues (policy lifetimes outlive domain ownership, taking away user agency).
In my ideal world:
a) There would be no separate HTTPS URL scheme for secure connections. Cool URIs don't change and the transport security doesn't change the resource you are addressing. A separate protocol doesn't prevent downgrade attacks in all cases anyway (old HTTP URLS, entering domains in the address bar, no indication of TLS version and supported ciphers in the scheme).
b) Trust should be provided in a hierarchical manner, just like domains themselves - e.g. via DNSSEC+DANE.
c) This mechanism would also securely inform browsers about what protocols and ciphers the server supports to allow for backwards compatibility with older clients (where desired) while preventing downgrade attacks on modern clients.
d) Network operators that interfere with the transmitted data are dealt with by legal means (loss of common carrier status at the very least, but ideally the practice should be outright illegal). Unencrypted connections shouldn't allow service providers to get away with scamming you.
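A hedged sketch of what (b) could look like with DANE: the TLSA record for a service carries a digest of its certificate or public key. Using a throwaway self-signed cert and an assumed domain purely for illustration (openssl CLI required):

```shell
# DANE-EE (usage 3), SPKI (selector 1), SHA-256 (matching type 1):
# publish the SHA-256 of the server's public key in DNS.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=example.com" -keyout dane.key -out dane.crt
digest=$(openssl x509 -in dane.crt -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 | awk '{print $NF}')
# The record a DANE-aware client would validate against:
echo "_443._tcp.example.com. IN TLSA 3 1 1 $digest"
```

With DNSSEC signing the zone, that record chains trust from the root zone down to the key, with no CA involved.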
Sesse__ 3 days ago [-]
The handshake doesn't primarily depend on AES; it is typically a Diffie-Hellman variant (which doesn't have any acceleration) that takes time. Anyway, you're hopefully using TLS 1.3 by now, where you can use ChaCha20 instead of AES :-)
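For anyone curious whether their own stack already offers ChaCha20, a quick local check (assuming the openssl CLI is installed):

```shell
# List the cipher suites the local OpenSSL build offers and pick out the
# ChaCha20-Poly1305 ones; on modern builds the TLS 1.3 suite
# TLS_CHACHA20_POLY1305_SHA256 appears alongside the AES-GCM suites.
openssl ciphers -v | grep -i chacha20
```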
ratorx 3 days ago [-]
> if I want to host a website …
The fundamental problem is a question of trust. There’s three ways:
* Well known validation authority (the public TLS model)
* TOFU (the default SSH model)
* Pre-distribute your public keys (the self-signed certificate model)
Are there any alternatives?
If your requirement is that you don’t want to trust a third party, then don’t. You can use self-signed certificates and become your own root of trust. But I think expecting the average user to manually curate their roots of trust is a clearly terrible security UX.
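As a minimal sketch of that "become your own root of trust" option (openssl CLI assumed; the hostname is a placeholder):

```shell
# Generate a self-signed certificate. self.crt is then distributed to
# clients out of band, and they trust only that file, e.g.:
#   curl --cacert self.crt https://myhost.internal/
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=myhost.internal" -keyout self.key -out self.crt
openssl x509 -in self.crt -noout -subject -enddate
```

The hard part isn't the crypto - it's the "distributed to clients out of band" step, which is exactly the key-curation burden the comment above is pointing at.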
xorcist 3 days ago [-]
> Are there any alternatives?
The obvious alternative would be a model where domain validated certificates are issued by the registrar and the registrar only. Certificates should reflect domain ownership as that is the way they are used (mostly).
There is a risk that Let's Encrypt and other "good enough" solutions takes us further from that. There are also many actors with economic interest in the established model, both in the PKI business and consultants where law enforcement are important customers.
ratorx 3 days ago [-]
How would you validate whether a certificate was signed by a registrar or not?
If the answer is to walk down the DNS tree, then you have basically arrived at DNSSEC/DANE. However I don’t know enough about it to say why it is not more widely used.
xorcist 3 days ago [-]
How do you validate any certificate? You'd have to trust the registrar, presumably like you trust any one CA today. The web browsers do a decent job keeping up to date with this and new top domains aren't added on a daily basis anyway.
Utilizing DNS, whois, or a purpose built protocol directly would alleviate the problem altogether but should probably be done by way of an updated TLS specification.
Any realistic migration should probably exist alongside the public CA model for a very long time.
tptacek 3 days ago [-]
A recent thread going into details of why (only a tiny fraction of zones are signed, in North America that count has gone sharply down over recent intervals, and browsers don't support it):
There is web of trust, where you trust people that are trusted by your friends.
There are issues with it, but it is an alternative model, and I could see it being made to work.
ratorx 3 days ago [-]
Ah, I forgot about that and never really considered it because GPG is so annoying to use, but it is fairly reasonable.
I don’t see how it has too many advantages (for the internet) over creating your own CA. If you have a mutually trusted group of people, then they can all share the private key and sign whatever they trust.
I think the main problem is that it doesn’t scale. If party A and party B who have never communicated before want to communicate securely (let’s say from completely different countries), there’s no way they would be able to without a bridge. With central TLS, despite the downsides, that is seamless.
account42 3 days ago [-]
Providing initial trust via hyperlinks could be interesting.
MrGreenTea 3 days ago [-]
Regarding the stuff you safeguard: what are your reasons for not sharing it somehow to prevent that loss when (not if) your drives fail?
Pannoniae 3 days ago [-]
I mean, I do! The music I have I put on Soulseek, although the more obscure stuff hasn't been downloaded yet. I also have fairly old video game mods - I don't even know where to share them or if anyone would be interested at all.
account42 3 days ago [-]
You could try to upload them to modding sites (preferably not ones with a login requirement for downloading) if you don't want to host them yourself. That can be either general modding archives or game-specific community sites - the latter are smaller but more likely to be interested in older mods. Make sure that whatever host you use can be crawled by the Internet Archive.
Interest is probably going to be low but not zero - I often play games long after they have been released and sometimes intentionally using older versions that are no longer supported by current mods.
Pannoniae 3 days ago [-]
You are entirely right - although I'd have to be careful with uploading it and where because on Steam Workshop, there's assholes who threaten to DMCA you without basis and there are similar problems on other sites too. But I'll look around :)
tomalbrc 3 days ago [-]
The Internet Archive?
michaelt 3 days ago [-]
I am 99% in favour of widespread use of TLS - but the reality is it means the web only works at the whim of the CA/Browser Forum. And some members of the forum are very eager to flex their authority.
If I do everything perfectly, but the CA I used makes some trivial error which, in the case of my certificate, has no real-world security impact? They can send me an e-mail at 6:40 PM telling me they're revoking my certificate at 2:30 PM the next day. Just what you want to find in your inbox when you get in the next day. I hope you weren't into testing, or staged rollouts, or agreeing deployment windows with your users - you'd better YOLO that change into production without any of that.
Even though it wasn't your mistake, and there's no suggestion you shouldn't have the certificate you have.
As far as the CA/B Forum is concerned, safety-critical systems that can't YOLO changes straight into production with minimal testing and only a few hours of notice don't belong on their PKI infrastructure. You'd better jump to it and fix their mistake right now.
account42 3 days ago [-]
I'm probably more critical of TLS in general than you are, but to be fair to LE, one of their biggest contributions has been to change certificate updates from a deployment into something that happens automatically during normal operations. If you have things set up the recommended way, your daily certbot/etc run will simply pick up a new certificate and load it into whatever servers need it without you having to lift a finger. Of course, in practice it doesn't always work out that way.
michaelt 3 days ago [-]
A daily certbot run won't protect you if the CA discovers the problem at 2pm (starting the 24 hour revocation timer) but they only have a fix rolled out by 6pm.
Anyone whose certbot run was between 2pm and 6pm would get their cert revoked the next day at 2pm anyway - even if it was only issued 18 hours ago.
There's also a higher level question: Is this the web we want to be building? One where every site and service has to apply for permission to continue existing every 24 hours? Do we want a web where the barrier to entry for hosting is a round-the-clock ops team, complete with holiday cover? And if you don't have that, you should be using Facebook or Twitter instead?
hehehheh 3 days ago [-]
Hopefully you terminate TLS far away from your app code so rolling that out to prod is a non issue. But I get your point!
dijit 3 days ago [-]
The digital equivalent of a local kebab shop menu does not need encryption.
The lack of understanding from us as technologists for people who would have had a working site and are now forced into either an oligopoly of site-hosting companies, or watching their site break repeatedly as TLS standards rotate, is one thing that brings me shame about our community.
You can come up with all kinds of reasons to gatekeep website hosting: “they have to update anyway” even when updating means reinstallation of an OS; “it's not that hard to rotate” say people with deep knowledge of computers; “just get someone else to do it” say people who have a financial interest in it being that way.
Framing people with legitimate issues as weirdos is not as charming as you think it is.
johannes1234321 3 days ago [-]
TLS doesn't just hide the information transmitted; it also ensures its integrity - so nobody on the network has tinkered with the prices on the menu.
Also the Kebap Shop probably has a form for reservation or ordering, which takes personal information.
True, they are all low risk things, but getting TLS is trivial (since many Webservers etc can do letsencrypt rotation fully automatically) and secure defaults are a good thing.
dijit 3 days ago [-]
There are plenty of websites that were just static pages used for conveying information. Most people who set them up lacked the ability to turn them into forms that connected to anything.
They’ve nearly all been lost to time now though, if a shop has a web-presence it will be through a provider such as “bokabord”, doordash, ubereats (as mentioned), some of whom charge up to 30% of anything booked/ordered via the web.
But, I guess no MITM can manipulate prices… except, by charging…
matrss 3 days ago [-]
> There are plenty of websites that were just static pages used for conveying information.
If you care about the integrity of the conveyed information you need TLS. If you don't, you wouldn't have published a website in the first place.
A while back I saw a WordPress site for a podcast without HTTPS where people also argued it didn't need it. They had banking information for donations on that site.
Sometimes I wish every party involved in transporting packets on the internet would just mangle all unencrypted http that they see, if only to make a point...
Like, "telnet textfiles.com 80", then "GET / HTTP/1.0" <enter>, "Host: textfiles.com" <enter><enter>, and you have the page.
What would be the point of making these unencrypted sites disappear?
matrss 3 days ago [-]
textfiles.com says: "TEXTFILES.COM has been online for nearly 25 years with no ads or clickthroughs."
I'd argue that that is most likely an objectively false statement, and that the domain owner is in no position to authoritatively answer the question of whether it has ever served ads in that time. As it is served without TLS, any party involved in the transportation of the data can mess with its content and e.g. insert ads. There are a number of reports of ISPs having done exactly that in the past, and some might still do it today. Therefore it is very likely that textfiles.com as shown in someone's browser has indeed had ads at some point in time, even if the one controlling the domain didn't insert them.
Textfiles also contains donation links for PayPal and Venmo. That is an attractive target to replace with something else.
And that is precisely the point: without TLS you do not have any authority over what anyone sees when visiting your website. If you don't care about that then fine, my comment about mangling all http traffic was a bit of a hyperbole. But don't be surprised when it happens anyway and donations meant for you go to someone else instead.
eesmith 3 days ago [-]
There is a big difference between "served ads" and "ads inserted downstream."
If you browse through your smart TV, and the smart TV overlays an ad over the browser window, or to the side, is that the same as saying the original server is serving those ads? I hope you agree it is not.
If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?
If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?
If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?
If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?
Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.
I therefore conclude that your interpretation is meaningless.
> "as shown in someones browser"
Which is different than being served by the server, as I believe I have sufficiently demonstrated.
> But don't be surprised when it happens anyway
Jason Scott, who runs that site, will not be surprised.
matrss 3 days ago [-]
> If you browse through your smart TV, and the smart TV overlays an ad over the browser window, or to the side, is that the same as saying the original server is serving those ads? I hope you agree it is not.
I agree it is not. That is why I didn't say that the original server served ads, but that the _domain_ served ads. Without TLS you don't have authority over what your domain serves, with TLS you do (well, in the absence of rogue CAs, against which we have a somewhat good system in place).
> If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?
This is simply a compromised device.
> If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?
This is an ISP giving you instructions to compromise your device.
> If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?
No, in this case I am clearly no longer looking at the website, but asking a third-party to convey it to me with whatever changes it makes to it.
> If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?
No, archive.org is then serving an ad on their own domain, while simultaneously showing an archived version of your website, the correctness of which I have to trust archive.org for.
> Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.
Fair point. I should have said that I additionally expect the client device to be uncompromised, otherwise all bets are off anyway, as your examples show. The implicit scenario I was talking about involves an end-user on an uncompromised device putting your domain into their browser's URL bar or making a direct http connection to your domain in some other way.
eesmith 3 days ago [-]
Both those domains have a specific goal of letting people browse the web as if it were the 1990s, including using 1990s-era web browsers.
They want the historical integrity, which includes the lack of data integrity that you want.
textfiles 8 hours ago [-]
This argument is stupid.
johannes1234321 3 days ago [-]
Instead of using telnet, switch over to a TLS client,
and you can do the same. A simple wrapper, alias or something makes it as nice as telnet.
eesmith 3 days ago [-]
My goal was to demonstrate that it supported http, and did not require TLS.
ndriscoll 3 days ago [-]
I'm pretty sure tons of people have made web pages or sites without caring about the integrity of the conveyed information. Not every website is something important like banking. It doesn't matter if a nefarious actor tweaks the information on a Shining Force II shrine (and even then, only for people who they're able to MITM).
In practice, many pages are also intentionally compromised by their authors (e.g. including malware scripts from Google), and devices are similarly compromised, so end-to-end "integrity" of the page isn't something the device owner even necessarily wants (c.f. privoxy).
account42 3 days ago [-]
What ensures the integrity of conveyed information for physical mail? For flyers? For telephone conversations?
The cryptography community would have you believe that the only solution to getting scammed is encryption. It isn't.
ozim 2 days ago [-]
The post I am typing here can happily go through Russia/China/India and you cannot do anything about it - and bad actors can actually make your traffic go through them, as with the BGP hijacking that has happened multiple times.
The NSA was installing physical devices at network providers that scoured through all traffic - they did not have to have an Agent Smith opening envelopes or even looking at them. Keep in mind criminals could do the same: just pay off some employees at a provider. And not all network providers are in countries where law enforcement works - and as mentioned, your data can go through any such provider.
If I send physical mail I can be sure it is not going through Bangkok unless I specifically send it with destination that requires it to go there.
matrss 3 days ago [-]
> What ensures the integrity of conveyed information for physical mail? For flyers? For telephone conversations?
Nothing, really. But for physical mail the attacks don't scale nearly as well: you would need to insert yourself physically into the transportation chain and do physical work to mess with the content. Messing with mail is also taken much more seriously as an offense in many places, while laws are generally not as strict for network traffic.
For telephone conversations, at least until somewhat recently, the fact that synthesizing convincing speech in real time was not really feasible (especially not if you tried to imitate someone's speech) ensured some integrity of the conversation. That has changed, though.
dijit 3 days ago [-]
[flagged]
burnished 3 days ago [-]
Huh. Never thought about it that way; replacing hypothetical MITM attacks with genuine middlemen.
account42 3 days ago [-]
The Kebab Shop also takes orders over the phone, which is not any more encrypted.
And prices are more likely to be simply outdated than modified by a malicious entity. Your concerns are not based in reality.
philistine 3 days ago [-]
The fact that content on http websites hasn’t been maliciously switched does not mean that https didn’t work.
It’s like a vaccine. We vaccinated most of the web against a very bad problem, and that has stopped the problem from happening in the first place. If 90% were still on http, way more ISPs would insert ads.
megous 3 days ago [-]
You can get integrity at higher levels in the stack (or lower).
pests 3 days ago [-]
You say that until some foreign national gets their kebab order MITM'd to deliver them some malicious virus that ends up getting them killed.
account42 3 days ago [-]
Which is of course a real concern for the average joe.
pests 2 days ago [-]
I wish for all my fellow humans to be safe, just because it doesn't protect me personally doesn't mean I think it's not a concern.
gotodengo 3 days ago [-]
Their site will break consistently in any case. Running a site in 2024 comes with a responsibility to update regularly for a good reason.
There are more than enough forgotten kebab shop restaurant pages that are now serving malware because they never updated WordPress that an out of date certificate warning is a very good "heads up, this site hasn't been maintained in 6 years"
If we're talking hosting even a static HTML file without using a site hosting company, that already requires so much technical knowledge (Domain purchasing, DNS, purchasing a static IP from your ISP, server software which again requires vuln updates) that said person will be able to update a TLS cert without any issue.
account42 3 days ago [-]
> There are more than enough forgotten kebab shop restaurant pages that are now serving malware
[citation needed]
There are plenty of organizations that actively scan the web for "malware" (aka anything that the almighty machine learning algorithms don't like) and are more than happy to harass the website owner and hosting company until their demands are met.
Security is ultimately a social issue. Technical means are only one way to improve it and can never solve it 100%. You must never lose sight of the cost imposed by technological security solutions versus what improvement they actually offer.
serbuvlad 3 days ago [-]
I'm really curious as to what you see as the disadvantages of TLS. Sure, the advantages are minor for some services and critical for other services.
However, if you have already bought a domain name, the cost of setting up TLS is basically 0. You just run certbot and give it the domains you want certificates for. It will set up auto-renewal and even edit your Apache/NGINX configs to enable TLS.
Sure, TLS standards rotate. But that just means you have to update Apache/NGINX every like 5 years. Hardly a barrier for most people imo.
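Concretely, the setup described above is roughly the following (a command sketch, assuming a Debian-style nginx install; certbot rewrites the server blocks and installs a renewal timer itself):

```shell
# One shot: obtain a certificate, edit the nginx config, enable renewal.
certbot --nginx -d example.com -d www.example.com
# Confirm the automatic renewal path actually works:
certbot renew --dry-run
```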
dijit 3 days ago [-]
It's better than it was, but TLS has a lot more knobs that can fail than even a basic HTTP server does; there's a whole lot of handoff happening, and running multiple sites is fraught with minor issues.
certbot is a Python program - better hope it keeps working. It definitely hasn't kept working for me, and I'm a seasoned sysadmin: a combination of my Python environment becoming outdated (making updates impossible) and the deprecation of a critical API it needed.
The #1 cause of issues with my hobby website, darkscience.net, is that it refuses to negotiate with Chrome because the TLS suites are considered too old - yet in 2020 I was scoring A+ on the Qualys SSL report.
It's just time, time and effort, and it's mostly wasted.
The letsencrypt tools are really wonderful, just pray they don’t break, and be ready to reinstall everything from scratch at some point.
ndsipa_pomu 3 days ago [-]
> certbot is a Python program - better hope it keeps working. It definitely hasn't kept working for me, and I'm a seasoned sysadmin: a combination of my Python environment becoming outdated (making updates impossible) and the deprecation of a critical API it needed.
You could try out acme.sh that's written purely in shell. It's extremely capable and supports DNS challenge and multiple providers
> certbot is a python program, better hope it keeps working
There is also https://github.com/srvrco/getssl which is a bash script. I lightly audited it years ago and it did not seem to upload your private keys anywhere... I've used it occasionally, but I don't let it run as root, so I need to copy the retrieved certs into the server config manually.
dijit 3 days ago [-]
There's a bunch of alternative clients, and I've tried many.
The larger point is that it's required for what amounts to a poster on a wall: yes, someone can come along with a pen and alter the poster - but it's not worth the effort to secure for most people, and it will degrade rapidly with such security too.
So, instead, they turn to middlemen, or don't bother.
There's a myriad of other issues, but it's not as easy as we claim.
homebrewer 3 days ago [-]
Modern HTTP servers (like Caddy) do not make it any more difficult than setting up plain HTTP (it's actually the opposite - you have to specify the scheme, http://, in front of the domain name if you do not want HTTPS; otherwise you get HTTPS plus a 301 redirect from HTTP).
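A sketch of that behavior (site names and paths are placeholders):

```shell
# Caddy defaults to HTTPS with automatically obtained certificates;
# writing the http:// scheme explicitly is how you opt *out* for a site.
cat > Caddyfile <<'EOF'
example.com {
    root * /var/www/html
    file_server
}

http://legacy.example.com {
    respond "plain HTTP on purpose"
}
EOF
grep -c example.com Caddyfile
```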
JoshTriplett 3 days ago [-]
> the cost of setting up TLS is basically 0. You just run certbot
certbot is not even close to the pinnacle of easy TLS setup. Using an HTTP server that fully integrates ACME and tls-alpn-01 is much nicer: tell your server what domain you use, and it automatically obtains a certificate.
taneliv 3 days ago [-]
I'm always reminded about this by being on the other side of the equation with my car.
There is regulation, like mandatory yearly inspections and anyone is only allowed to sell road worthy vehicles. These rules are rather strict, likewise for the driver's license. They aren't impossible to know or understand, but there's a lot of details.
However, when I take it to the shop, whether for that yearly inspection, regular maintenance, or because there's something apparently wrong with it, I never know what to expect in terms of time and money.
Oh, it needs a new thingamajig? I start to mildly sweat, fearing it to cost six hundred like the flux capacitor that had to be replaced last week/month/year and took two weeks to get shipped from another country. "Ninety cents, and we put it in place for no charge, it literally takes ten seconds", like, I love to hear the news, could have saved me from the anguish by giving a hint when I asked about the price! But need a new key? Starting from three hundred fifty, plus one hundred seventy for a backup copy. Like, where do these prices come from? Actually, don't tell me, I'm a software engineer. I know, I know.
I'll just wait until you want your car-shop web pages up. Oh, for that you'll need PCI DSS, and we can't do those other things because of GDPR. Sorry, my hands are tied here. That'll be four thousand plus tax, mister auto mechanic shop owner.
dns_snek 3 days ago [-]
I don't think that's a good analogy, you're comparing a mass produced product to an individualized B2B service that's going to generate profits for your customer.
taneliv 3 days ago [-]
It's not an analogy. It's asymmetric warfare.
sureIy 3 days ago [-]
Irrelevant.
Safe transfer should be the default.
Your argument is akin to "I don't have anything to hide."
You just do it and don't think about it. Modern servers and services make this completely transparent.
The kebab guy doesn't need to worry about this as long as they're not fooled into buying from mala fide hosting companies who try to upsell them on something that should be the baseline.
clan 3 days ago [-]
Nah ah. Not.
While we might be able to find common ground in the statement that "safe transfer should be the default", we will differ on the definition of "safe".
Unfortunately these discussions often end up in techno-babble. Especially here on HN, where we tend to enjoy rather binary viewpoints without too many shades of gray.
Try being your own devils advocate: "What if I have something to hide?".
Then deal with that. Legitimately. Reasonably. Unless you are an anarchist, I assume we can agree that we need authorities. A legal framework. Policing.
So I 100% support Let's Encrypt and what they have done to destroy the certificate racket. That is a force of good!
But I do not think it was a healthy thing that the browsers (and Google search results) "forced" the world defacto to TLS only.
Why? Look at the list of Trusted Root Certificates in the big OS and browsers. You are telling me only good guys are listed? None here are or can be influenced by state actors?
But that is the good kind of MITM? This then hinges on your definition of "safe transport". Only the anarchist can win against the government. I am not one.
It might sound like I am in the "I do not have anything to hide" camp. I am not that naive. But I am firmly in the "I prefer more scrutiny when I have something to hide". Because the measures the authorities needs to employ today are too draconian for my liking.
I preferred the risk of MITM at the ISP level to what the authorities need to do now to stay in control. We have not eliminated MITM; we have just made it harder. And we forgot to discuss legitimate reasons for MITM because "bad".
This is not a "technical" discussion on the fine details of TLS or not. But should be a discussion about the societal changes this causes. We need locks to keep the creeps out but still wants the police to gain access. The current system does not enable that in a healthy way but rather erodes trust.
Us binary people can define clear simple technical solutions. But the rest of the world is quite messy. And us bit twiddlers tend to shy away from that and then ignore the push-back to our actions.
We cannot have a sober conversation unless we depart from the "encrypt everything" is technically good and then that is set in stone. But here we are: Writing off arguments as irrelevant.
dspillett 3 days ago [-]
Or worse: people who still go on and on about how self-signed certificates should be accepted by browsers, and can't be convinced that blind-trust-on-first-use is lousy security.
They usually counter with “but SSH uses TOFU” because they don't see, and can't be convinced of, the problem of not verifying the server key signature⁰. I can be fairly sure that I'm talking to the daemon that I've just setup myself without explicitly checking the signature¹, but that particular side-channel assurance doesn't apply to, for example, a client connecting to our SFTP endpoint for the first time² to send us sensitive data.
--
[0] Basically, they get away with doing SSH wrong, and want to get away with doing HTTPS wrong the same way.
[1] Though I still should, really, and actually do in DayJob.
[2] Surprisingly few banks' tech teams bother to verify SSH server signatures on first connection, I know because the ones in our documentation were wrong for a time and no one queried the matter before I noticed it when reviewing that documentation while adding further details. I doubt they'd even notice the signature changing unexpectedly even though that could mean something very serious is going on.
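For reference, checking a host key fingerprint properly is a one-liner. Demonstrated here on a freshly generated key since there's no real server at hand (OpenSSH's ssh-keygen/ssh-keyscan assumed; the hostname is a placeholder):

```shell
# For a real endpoint you'd fetch the key first:
#   ssh-keyscan -t ed25519 sftp.example.com > hostkey.pub
# and compare the printed fingerprint against the one the operator
# published out of band. Demo on a locally generated key:
ssh-keygen -t ed25519 -N "" -f demo_host_key -q
ssh-keygen -lf demo_host_key.pub    # prints "256 SHA256:... (ED25519)"
```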
guappa 3 days ago [-]
My letsencrypt cert, despite all my attempts, works fine with browsers but WILL NOT work with wget/curl/python/whatever.
Plus setting up letsencrypt isn't really easy. Last time it was failing because I had disabled HTTP on port 80 entirely on my server… but letsencrypt uses that to verify that my website has the magic file. So I had to make a script to turn it on for 5 minutes around the time when the certificate gets renewed. -_-'
None of this is easy or quick, and people have other stuff to do than to worry about completely hypothetical attacks on their blog.
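For anyone in the same spot: certbot's `--pre-hook` and `--post-hook` options run commands only when a renewal is actually attempted, which avoids guessing the renewal time from cron. A sketch, assuming the HTTP-01 challenge and a ufw firewall (substitute whatever your firewall actually uses):

```shell
# Open port 80 only for the duration of the renewal attempt,
# then close it again afterwards (ufw rules are placeholders)
certbot renew \
  --pre-hook  "ufw allow 80/tcp" \
  --post-hook "ufw delete allow 80/tcp"
```

Certbot also records hooks in its per-certificate renewal config, so subsequent plain `certbot renew` runs should keep the same behaviour.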
mmsc 3 days ago [-]
>letsencrypt uses that to verify that my website has the magic file.
So, instead, use the other authentication methods. For example, DNS.
guappa 3 days ago [-]
Is that easier to configure? (no it isn't)
mmsc 3 days ago [-]
Setting a single DNS record which doesn't need to be changed is more difficult than setting a crontab to open port 80 "around the time you expect the ACME challenge"?
How's that?
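(The set-once record presumably means the usual delegation trick: CNAME the challenge label to a zone you can update via API, as in the acme-dns approach. All names below are placeholders.)

```
; Set once in the real zone, never touched again:
_acme-challenge.example.com.  IN  CNAME  12345678.auth.acme-dns.example.

; On each renewal the ACME client writes the TXT proof into the
; delegated zone, not into example.com itself.
```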
ozim 2 days ago [-]
Not really hypothetical.
Google "isp injecting ads", well most of it is from 10 years ago - but that is because now we have TLS everywhere.
And it is not an attack on your blog but on the readers of your blog; your blog gets the blame, of course, if they get infected by malware or see adult ads.
account42 3 days ago [-]
In general, if you need to resort to ad hominems like calling your detractors weirdos, then maybe your position isn't as justified as you want to believe.
wannacboatmovie 3 days ago [-]
Can't believe the HTTPS-everywhere cargo cult still can't get it through their skulls that there is still a place and use cases for plaintext HTTP. In some cases, CRLs for example, they must not be served over HTTPS.
account42 3 days ago [-]
I'm kinda mixed on LE.
It's nice that you can now get free TLS certs without having to resort to shady outfits like StartSSL. This allows any website to easily move to HTTPS, which has basically eliminated sensitive data (including logins) being sent over unencrypted connections.
On the other hand, this reinforces the inherently broken trust model of TLS certificates, where any certificate authority (and a lot of them are controlled by outright hostile entities) has the ability to issue certificates for your domain without your involvement. Yes, there are tons of kludges to try and mitigate this design flaw (CAA records, certificate transparency) but they don't 100% solve the issue. If not for LE, perhaps there would have been more motivation to implement support for a saner trust mechanism by now that limits certificate issuance to the entities who actually have any authority to decide over domain ownership, like with DNSSEC+DANE.
I'm also concerned with the (intentional) lack of backwards compatibility with moving sites to TLS, which is not just a one time TLS on/off issue but a continual deprecation of protocols and ciphers. This is warranted for things that need to be secure like banking or email but shouldn't really be needed to view a recipe or other similar static and non-critical information. Concerns about network operators inserting ads or other shit are better solved with regulation.
rocqua 3 days ago [-]
> If not for LE perhaps there would have been more motivation to implement support for a saner trust mechanism by now
I would argue that LE has only highlighted these problems, and now actually causes people with power to worry about them.
There is a chance we would have gotten something better than TLS if the lack of LE kept certificates a pain. But that seems unlikely to me. Because the fundamental problem remains hard.
selectnull 3 days ago [-]
What I'm most thankful for is the ACME protocol.
Does anyone remember how we renewed certificates before LE? Yeah, private keys were being sent via email as zip attachments. That was a security charade. And as far as I know, it was the norm among CAs (I remember working with several).
Thank you Let's Encrypt.
ta1243 3 days ago [-]
Just handholding a renewal with globalsign
I generate the new key on the server as part of the csr creation process. I run it on the server itself so the key never leaves the server's internal storage.
CSR gets sent off to globalsign (via a third party because #largeCompany), then a couple of days later I get the certificate back and apply it to the server
Would love to use ACME instead, and store the key in memory (ramdrive etc), but these are the downsides of working for a company less agile than an oil-tanker
chrismorgan 3 days ago [-]
What of Certificate Signing Requests? The whole purpose was that you wouldn’t send private keys around.
(I was only slightly involved with a couple of TLS certificates before then, and certainly they enforced the CSR approach, but maybe such terrible practice was more common in the real world than I knew.)
selectnull 3 days ago [-]
My memory of the whole process is kinda fuzzy, you're probably right about CSRs. Hopefully the private keys were not sent around via unencrypted email.
But the point still stands: the whole process was a nightmare, no automation, error prone, renewals easily forgotten...
The large companies could have had a staff to manage all that. I was just a solo developer managing my own projects, and it was a hassle.
nailer 3 days ago [-]
Regardless of whether you use LE or not, you would never send a private key in a zip file, but rather a public key.
jillesvangurp 3 days ago [-]
I still have to go through that bs with some of my setups. Load balancers in cloud environments don't tend to integrate easily with external ACME providers like letsencrypt and the internal ones require moving your domain to them which doesn't always work. And not all cloud providers even have this. Most of them seem to treat ACME as an afterthought.
You can sort of do some hacks with scripting this together via things like terraform, cron jobs, or whatever. But it gets ugly and the failure modes are that your site stops working if for whatever reason the certificates fail to renew (I've had this happen), which courtesy of really short life times for certificates is of course often.
So, I paid the wildcard certificate tax a few days ago so I don't have to break my brain over this. A couple of hundred. Makes me feel dirty but it really isn't worth days of my time to dodge this for the cost of effectively < 2 hours of my time in $. Twenty minute job to issue the csr, get the certificate and copy it over to the relevant load balancers.
globular-toast 3 days ago [-]
How many of those CAs are still in everyone's trust stores?
tialaramex 3 days ago [-]
> Yeah, private keys were being sent via email as zip attachments.
Internally, perhaps. And also on a small scale maybe with CA "resellers" who were often shady outfits which were in it for a quick buck and didn't much care about the rules.
But as a formal issuance mechanism I very much doubt it. The public CAs are prohibited from knowing the private key for a certificate they issue. Indeed there's a fun incident some years back where a reseller (who have been squirrelling away such private keys) just sends them all to the issuing CA, apparently thinking this is some sort of trump card - and so the issuing CA just... revokes all those certificates immediately because they're prohibited from knowing these private keys.
The correct thing to do, and indeed the thing ACME is doing, although not the interesting part of the protocol, is to produce a Certificate Signing Request. This data structure goes roughly as follows: Dear Certificate Authority, I am Some Internet Name [and maybe more than one], and here is some other facts you may be entitled to certify about me. You will observe that this document is signed, proving I know a Private Key P. Please issue me a certificate, with my name and other details, showing that you associate those details with this key P which you don't know. Signed, P.
This actually means (with ACME or without) that you can successfully air gap the certificate issuance process, with the machine that knows the private key actually never talking to a Certificate Authority at all and the private key never leaving that machine. That's not how most people do it because they aren't paranoid, but it's been eminently possible for decades.
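That air-gapped flow is visible even in the plain openssl commands; a sketch (the file names, key size, and CN are arbitrary placeholders):

```shell
# The private key is generated locally and never needs to leave this box
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out server.key

# The CSR bundles the public key and the requested name, self-signed
# with the private key to prove possession of it
openssl req -new -key server.key -subj "/CN=example.com" -out server.csr

# Anyone, the CA included, can check that proof without the private key
openssl req -in server.csr -noout -verify
```

Only `server.csr` ever needs to travel to the CA, by ACME, email, or sneakernet.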
JoshTriplett 3 days ago [-]
> Indeed there's a fun incident some years back where a reseller (who have been squirrelling away such private keys) just sends them all to the issuing CA, apparently thinking this is some sort of trump card - and so the issuing CA just... revokes all those certificates immediately because they're prohibited from knowing these private keys.
That sounds like a fun story. I'd love to read the post-mortem if it's public.
Yup. Trustico. As usual my preference is to avoid caring whether people are malevolent or simply incompetent, by judging on the results of their actions not guessing their unknowable mental state, so hey, maybe Trustico incompetently believed it was a good idea to know private keys (it is not) and incompetently acted in a way they thought was in their customers' best interests (it was not) and so they're in the doghouse for that reason.
[Edited: I originally said Trustico was out of business, but astoundingly the company is still trading. I have no Earthly idea why you would pay incompetent people to do something that's actually zero cost at point of use, but er... OK]
account42 3 days ago [-]
According to that article, Trustico wanted the certs revoked and intentionally sent the keys to DigiCert in order to get them to act. While they still shouldn't have had those keys in the first place, it sounds like the "trump card" worked here.
tialaramex 3 days ago [-]
At the time my guess was that Trustico thought if the certificates have to be revoked they get their money back, and I can't imagine DigiCert's contracts are bad enough that a customer can get their money back if the customer screws up, but I have not read the contract.
The claims from Trustico are very silly. They want their customers to believe everything is fine, and yet the only possible way for this event to even occur is that Trustico are at best incompetent. To me this seems like one of those Gerald Ratner things where you make it clear that your product is garbage and so, usually the result is that your customers won't buy it because if they believe you it's garbage and if they don't believe you they won't want your product anyway - but whereas Ratner more or less destroyed a successful business, Trustico is still going.
gloosx 3 days ago [-]
I really wish something like this comes along for the desktop certification world as well. Microsoft has gone full insane mode with their current requirements, and their certificate sellers are making more money than ever without lifting a finger.
So funny that all of their security, vetting and endless verifications stand, to this day, on a single passport photo sent over email.
brchr 3 days ago [-]
Peter Eckersley (1978-2022) was posthumously inducted into the Internet Hall of Fame for his founding work on Let’s Encrypt. The Internet is a better place because of Peter (and his many collaborators and colleagues).
Vint Cerf & Bob Kahn (TCP/IP), Paul Baran (packet switching), Tim Berners-Lee (WWW), Marc Andreessen (Netscape), Brewster Kahle (Internet Archive), Douglas Engelbart (hypertext), Aaron Swartz (RSS, Creative Commons), Richard Stallman (GNU, free software movement), Van Jacobson (TCP/IP congestion control), Jimmy Wales (Wikipedia), Mitchell Baker (Mozilla), Linus Torvalds (Linux)...
...but you’re missing the point of my comment, which is simply to acknowledge and honor (my late dear friend) Peter.
usr1106 3 days ago [-]
Ah, I missed Linus Torvalds and you might have missed Bob Metcalfe (Ethernet) and Jon Postel (RFC work).
My point was not to criticize the achievements or the work of any of those people.
1. I was not actively aware that this hall exists
2. I am mostly critical of such awards in general. I have noted that several companies receiving the "Export company of the year" here in this country (doesn't matter which one) went bust a couple of years later. I received the "hacker of the year" award at my workplace some years ago. It was supposed to hang with all previous awards in the cafeteria. I did not like that and "forgot" it at home. I quit the company a year later anyway.
Edit: Forgot that I worked for the "software product of the year" twice in my life. One needed heavy, painful architectural rework 3 years later. The other was Series 60. People old enough know how that went, killed a global market leader.
computergert 3 days ago [-]
Coincidentally I just got an email from a potential client, Dutch governmental institution, that they don’t want me to use Letsencrypt. They prefer paying for a certificate themselves. Not sure why, apparently they don’t trust it.
CarpaDorada 3 days ago [-]
A lot of people are not aware that HTTPS certificates do not necessarily guard you from certain types of attacks like DNS injection. You can see <https://www.youtube.com/watch?v=exy5JwAU8qk> for one example where an attack campaign called DNSPionage obtained valid certificates for their attacks.
To explain the issue with HTTPS certificates simply: issuance is automated and rests on the security of DNS, which is only assured via DNSSEC, and most do not implement it.
ta1243 3 days ago [-]
Technically it's an attack against the certificate issuing authority, bypassing their authorisation checks (is this person really authorised to request a certificate for the domain?).
Trouble is even CAA entries won't help here (if you're spoofing A records, you can spoof CAA records too). DNSSEC might help against this, I don't know enough about DNS though.
Another type of attack is an IP hijack, which allows you to pass things like http authentication (the normal ACME method), but won't bypass CAA records. Can't use letsencrypt to issue a cert - even if you own the IP address my A or AAAA records point to - if my CAA doesn't have letsencrypt as an approved issuer.
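For reference, CAA is just a handful of DNS records (the domain and mailto below are placeholders); the catch noted above is that a spoofer who controls the CA's view of your DNS can hide them just as easily as the A records:

```
example.com.  IN  CAA  0 issue "letsencrypt.org"   ; only LE may issue
example.com.  IN  CAA  0 issuewild ";"             ; forbid wildcard issuance
example.com.  IN  CAA  0 iodef "mailto:hostmaster@example.com"
```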
CarpaDorada 3 days ago [-]
With DNSSEC you can be certain that the response you got was issued by the nameserver that is claimed (well, by someone who owns the private key). The domain owner, and registrar can both be at fault, the CA is the last entity to blame because they are performing an automated check of domain ownership. For maximum security you'd want to buy your own TLD as my YT video talks about, to circumvent any other registries, registry wholesalers, and registrars' security models, but an adequate protection for most is to use registry/registrar lock and implement DNSSEC correctly. IP hijack will then not work when all of the above is done correctly.
Another option is manual certificate issuance with a CA whose security model is better than yours, but not implementing DNSSEC leaves you open to other attacks.
tptacek 3 days ago [-]
Misissuance from direct DNS spoofing basically never happens. When the DNS is used to misissue a certificate, what has normally happened is a registrar account has been phished. Direct DNS spoofing is an exotic attack. Further: DNSSEC has only a partial fix for it, and the WebPKI has non-DNS-dependent mitigations (most obviously CT, but also multi-perspective DNS lookup, which is apparently going to be a BR next year).
Generally speaking, setting up DNSSEC is probably a bad move for most sites.
ta1243 2 days ago [-]
CT is great, but you do need to watch for certificates issued for your domains
lambdaone 3 days ago [-]
Let's Encrypt is a massive achievement, and is now essential infrastructure.
Basing it on an open protocol, so it doesn't become a single point of failure, was a clever idea that allows the idea to survive the demise of any single organization.
May there be many more such anniversaries.
INTPenis 3 days ago [-]
Config management took me many years to adopt, containers took me about 6 years to warm up to. But LE was something I jumped on immediately. I had worked in web hosting for 10 years already when it came out, so I remember faxing your driver's license in order to validate a TLS cert. It just felt like such a scam for so long that these CAs were overcharging for something that is just a key signing.
But I guess automation and standards had to catch up in order for LE to securely setup their CA.
bigtex 3 days ago [-]
Let's Encrypt helped reduce our OUTRAGEOUS Entrust bill(legacy vendor, I didn't pick them, they had insane security protocols for a small company who just needed SSL certs). We had a 4 yr/$14k contract for about 11 certs. Now our SSL is near 0, except for a cert for SSRS that is hard to automate with LE.
xnx 3 days ago [-]
Are there any areas today similar to the SSL of 10 years ago that a service like Let's Encrypt could remedy? I see a lot of subscription apps that could pretty easily be replaced by free, non-subscription, ones, but I don't know of anything that widespread.
pplonski86 3 days ago [-]
Let's encrypt saved me :) I love to use it with certbot in docker-compose :) deploying really can be simple
KronisLV 3 days ago [-]
Here’s to 10 more years! With web servers like Caddy, software like certbot and even something like Apache2 getting mod_md, I’d say we’re in a pretty good spot!
That said, I’m wondering why there aren’t 10 or so popular alternatives to LE, since that seems to be the landscape for domain registrars, for example.
stephenr 3 days ago [-]
I really wish they would finally branch out and offer S/MIME certificates. Good email clients support them out of the box, it's just a PITA to get them if you don't want to order 100 at a time or something equally ridiculous for SME/individuals.
account42 3 days ago [-]
Would frequent rotation be reasonable for S/MIME certs though?
stephenr 3 days ago [-]
There's nothing specifically that says S/MIME certs would need to have the same 90-day expiration date, but even if they did, I'm making a basic assumption that if there were a standardised, free API to issue S/MIME certs, major email clients would build-in a client to request a certificate - heck it might even prompt major email providers to offer their own solutions for certs, to compete with alternatives that supported using LE certs.
Tepix 3 days ago [-]
Once per year or less.
Remember: to decrypt old messages, you need to keep your old certificates/keys around.
You can request a new certificate with the same key but i'm not sure that's a good safety practice.
kome 3 days ago [-]
thank you Edward Snowden
vaylian 3 days ago [-]
I wanted to post that exact comment.
aurareturn 3 days ago [-]
People talk about paying for certificates but one major pain point solved by PaaS companies over the last 5 years is automatically adding certificates and renewing them for your app deployments. It saves a huge amount of headache.
In 2024, if your PaaS does not have automated encryption for deploys, I will never use it.
lakomen 3 days ago [-]
Time flies when you're having fun.
Congratulations
_0xdd 3 days ago [-]
Such an awesome service (and protocol!)
Havoc 3 days ago [-]
Reminder that they are donation-dependent
wannacboatmovie 3 days ago [-]
Nothing makes me trust a site with my payment info more than seeing a LE or domain-validated certificate with no ownership details in the DN.
sunaookami 3 days ago [-]
HTTPS does not validate the trustworthiness of a site. Never has and never will. It only validates that the site has not been tampered with during transfer. Phishing sites can also have HTTPS, that doesn't make them trustworthy.
jonathantf2 3 days ago [-]
Google.com (and my bank) use a DV certificate; if it's good enough for them it's good enough for anyone.
aaomidi 3 days ago [-]
The rate of misissuance of EV and OV is much higher than DV.
wannacboatmovie 3 days ago [-]
Source? I'm not questioning it, I'd like to know more. DV always seemed vulnerable to DNS tampering.
ta1243 3 days ago [-]
And EV is vulnerable to a fancy looking fax (remember them?)
Do you really check your site has an EV every single time? Especially now browsers treat them the same?
If not, how do you know someone hasn't got a DV certificate for this specific visit?
Scott Helme has a thorough takedown of them, and that was 7 years ago when they were still a thing.
EV and OV certificates, when they include DNS names, still require domain control validation anyway.
EV certs are generally manually verified. This means there’s a human factor in the middle of the process. DV certs can, and should, be fully automated.
But OTOH I can hardly blame them; failures can be catastrophic there, as they deal with real money directly and can be held liable for failures. So they don't really care about security so much as about covering their asses.
Some of it is truly bonkers and never was good practise, but much of the irritating stuff is simply out-of-date advice. The banks tend to be very slow to change unless something happens that affects (or directly threatens to affect) the bottom line, or puts them in the news unfavourably.
Of course some of it is bonkers, like HSBC and FirstDirect changing the auth for my personal accounts from “up to 9 case-sensitive alpha-numeric characters” (already considered bad practise for some years) to “6 digits”, and assuring me that this is just as secure as before…
I read it as “we have been asked to integrate an ancient system that we can't update (or more honestly in many cases: can't get the higher-ups to agree to pay to update), so are bringing our other systems down to the lowest common denominator”. That sort of thing happens too often when two organisations (or departments within one) that have different procedures merge or otherwise start sharing resources they didn't previously.
One of the practices was pathetic to the point of being funny: you had to input specific characters of your password (2nd, 4th, 6th, etc - this was changing at each login) AND there was a short timeout. My children probably learned a few new words when I was logging in.
Some time later they silently removed the first one.
I wonder if this would be an opportunity for revenue for Let's Encrypt? "We do 90-day automated-renewal certificates for free for everyone. If you're in an unusual environment where you need certificates with longer validity, we offer paid services you can use."
I think there's still incentive alignment here. Getting people moved from the "purchase 1 year certificate" world (which is apparently still required in some financial contexts) into the ACME-based world provides a path for making a regulatory argument that it'd be easy for such entities to switch over to shorter-lived certificates because the ACME infrastructure is right there.
The only good thing about dealing with certificate resellers at the time was that they were really flexible in a lot of ways. We got our EV cert refunded, or "store credit", and used the money to buy normal certificates.
Extended Validation can still play a role in a corporate's IT control framework; the extended validation is essentially a check-of-paperwork that then doesn't need to be performed by your own auditor. Some EV certificates also come with some (probably completely useless) liability insurance.
[1] https://chromium.googlesource.com/chromium/src/%2B/HEAD/docs...
Warranties / insurance on SSL certificates typically only pay out if a certificate is issued improperly, often in conjunction with other conditions like a financial loss directly resulting from the misissuance. Realistically, any screwup serious enough to result in that warranty paying out would also result in the CA being abruptly removed from browser root certificate programs.
And another fun one unrelated to signing was when they tried to trademark "Let's Encrypt" in 2015.
But yes, it is not a common issue and effort would be better focused on improving site security in other ways. (unlike the rest of my comment, this line isn't sarcasm.)
There are some scenarios where you still have to employ EV certificates, e.g. code signing.
https://groups.google.com/a/chromium.org/g/security-dev/c/h1...
https://groups.google.com/g/firefox-dev/c/6wAg_PpnlY4
You'll still find people online claiming EV certificates are worth anything more than $0, but you can ignore them just as well.
Not in any jurisdiction I'm aware of, though it's a big world so it wouldn't shock me if some small corner of it has bad laws.
> and also obligatory for rolling out as MasterCard/Visa merchant by their anti-fraud requirements
PCI DSS does not require EV certificates.
They don't recognize LE nor AWS's certs. Only the big paid ones. Such an annoying process too - to pay, to obtain and update the certs.
Nobody is like "Oh, the Jones Act ensures high quality ships" because it doesn't, the Jones Act just ensures that you're going to use those US shipyards, no matter what.
What about ZeroSSL, which is basically interchangeable with Let's Encrypt?
I'm really not a fan of it but I'm happier paying for a one year cert than doing that
If your DNS provider doesn't have an API, that seems like a separate issue but one that is well worth your organization's time if you're working in the enterprise!
(looking in to setting this up for a bunch of domains at work)
Let's not talk about key delivery. We will get back the admin cost of all that in a year if we tunnel them through one of our LBs.
Let’s Encrypt is the best thing to happen to the web in at least a decade.
Before them I never used SSL for anything, because the cost/benefit ratio was just not there for my services.
Since then, I never not use it.
Glad this problem just got completely resolved.
Today is roughly the ten year anniversary of when we publicly announced our intention to launch Let's Encrypt, but next year is the ten year anniversary of when Let's Encrypt actually issued its first certificate:
https://letsencrypt.org/2015/09/14/our-first-cert/
In December of 2015 (~9 years ago today) it was made available to everyone, no invitation needed:
https://letsencrypt.org/2015/12/03/entering-public-beta/
Can't believe it's been ten years.
TLS is fairly computationally intensive - sure, not a big deal now because everyone is using superfast devices, but try browsing the internet with a Pentium 4 or something. You won't be able to, because there is no AES instruction set support accelerating the encryption, so it's hilariously slow.
It also encourages memoryholing old websites which aren't maintained - priceless knowledge is often lost because websites go down because no one is maintaining them. On my hard drive, I have a fair amount of stuff which I'm reasonably confident doesn't exist anywhere on the Internet anymore.... if my drives fail, that knowledge will be lost forever.
It is also a very centralised model - if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
It also discourages naive experimentation - sure, if you know how, you can MitM your own connection but for the not very technical but curious user, that's probably an insurmountable roadblock.
The biggest problem that Edward Snowden uncovered was that this stuff was happening, and happening en masse, FULLY AUTOMATED - it wasn't some kid in a basement getting a MitM on your WiFi after hours of tinkering.
It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
There is no "balance" once you understand that bad people are going to swap your "fluffy kittens page" for "hardcore porn" the moment they get their hands on it. Bad people will include 0-day malware to target anyone and everyone just in case they can earn money on it.
You also have to understand that you don't have any control over which networks your "fluffy kitten page" data will pass through - malicious groups have pulled off BGP hijacking multiple times.
So saying "well it is just a fluffy kitten page my neighbors are checking for the photos I post" suggests there is a lot of explaining to be done about how the Internet works.
Transport security doesn't make 0-days any less of a concern.
> It was also happening fully automated as shitty ISPs were injecting their ads into your traffic, so your fluffy kittens page was used to serve ads by bad people.
That's a societal/legal problem. Trying to solve those with technological means is generally not a good idea.
> There is no "balance" if you understand bad people are going to swap your "fluffy kittens page" into "hardcore porn" only if they get hands on it. Bad people will include 0-day malware to target anyone and everyone just in case they can earn money on it.
The only people who can realistically MITM your connection are network operators and governments. These can and should be held accountable for their interference. You have no more assurance that your food wasn't tampered with during transport, but somehow you live with that. Similarly, the security of physical mail is a 100% legislative construct.
> You also have to understand don't have any control through which network your "fluffy kitten page" data will pass through - malicious groups were doing multiple times BGP hijacking.
I don't but my ISP does. Solutions for malicious actors interfering with routing are needed irrespective of transport security.
> So saying "well it is just fluffy kitten page my neighbors are checking for the photos I post" seems like there is a lot of explaining on how Internet is working to be done.
Not at all - unless you are also expecting them to have their fluffy kitten postcards checked for Anthrax. In general, it is security people who often need to touch grass, because the security model they are working with is entirely divorced from reality.
I am going to cross the street in front of that speeding car because driver will be held liable when I get hit and die.
If there is not even a possibility to hijack the traffic whole range of things just won’t happen. And holding someone liable is not the solution.
Only if you are talking about actual events in which this is happening as a matter of course. Because that's what it is when ISPs inject ads into plain-text HTTP traffic: a matter of course. It's a bit more like saying that we don't have a way to effectively enforce our laws against maliciously reckless driving so we install a series of speed bumps on the road (it's still not quite the same thing because it doesn't make the reckless driving impossible but it does increase the cost).
But it's not like we're talking about agreeable activity here, anyway. This particular case against TLS sounds like a case that favors criticizing an imperfect solution to widespread negative behavior over criticizing the negative behavior. It seems reasonable to look at the speed bumps (which one may or may not find distasteful) and curse the reckless behavior of those who incentivized their construction.
But that analogy of course runs dry rather quickly, because you can look both ways when crossing the street - on the internet, as I mentioned, you cannot control where data flows, and bad actors have already proven that they exploit this.
This is why it is not like an overpass that you can build where the need is - because for internet traffic the need is everywhere.
> Transport security doesn't make 0-days any less of a concern.
It does. Each layer of security doesn't eliminate the problem but does make the attack harder.
Mail and food are different in that there are not limitless scalable attacks that can originate anywhere around the globe.
It does make the actual execution of said attacks significantly harder. To actually hit someone's browser, they need to receive your payload. In the naive case, you can stick it on a webserver you control, but how many people are going to randomly visit your website? Most people visit only a handful of domains on a regular visit, and you've got tops a couple days before your exploit is going to be patched.
So you need to get your payload into the responses from those few domains people are actually making requests from. If you can pwn one of them, fantastic. Serve up your 0-day. But those websites are big, and are constantly under attack. That means you're not going to find any low-hanging fruit vulnerability-wise. Your best bet is trying to get one of them to willingly serve your payload, maybe in the guise of an ad or something. Tricky, but not impossible.
But before universal https, you have another option: target the delivery chain. If they connect to a network you control? Pwned. If they use a router with bad security defaults that you find a vulnerability in? Pwned. If they use a small municipal ISP that turns out to have skimped on security? Pwned. Hell, you open up a whole attack vector via controlling an intermediate router at the ISP level. That's not to mention targeting DNS servers.
HTTPS dramatically shrinks the attack surface for the mass distribution of unwanted payloads down to basically the high-traffic domains and the CA chain. That's a massive reduction.
> The only people who can realistically MITM your connection are network operators and governments.
Literally anyone can be a network operator. It takes minimal hardware. Coffee shop with wifi? Network operator. Dude popping up a wifi hotspot off his phone? Network operator. Sketchy dude in a black hoodie with a raspberry pi bridging the "Starbucks_guest" as "Starbucks Complimentary Wifi"? Network operator. Putting the security of every packet of web traffic onto "network operators" means drastically reducing internet access.
> You have no more security that your food wasn't tampered with during transport but somehow you live with that.
I've yet to hear of a case where some dude in a basement poisoned a Sysco truck without even having to put on pants. Routers get hacked plenty.
HTTPS is an easy, trivial-cost solution that completely eliminates multiple types of threats, several of which either do major damage to their target or risk mass exposure, or both. Universal HTTPS is like your car beeping at you when you start moving without your seat belt on: kinda annoying when you're doing a small thing in a tightly controlled environment, but an outstanding risk reduction, and it can be ignored with a little headache if you really want to.
I can see why the centralisation is suboptimal (or even actively bad if I'm feeling paranoid!), but other schemes (web of trust, etc.) tend to end up far more complicated for the end user (or their UA). So far no one has come up with a practical alternative without some other disadvantage that would block its general adoption.
> if I want to host a website, why do third parties need to issue a certificate for it just so people can connect to it?
Because if we don't trust those few 3rd parties, we end up having to effectively trust every host on the Internet, which means trusting everyone - and trusting everyone is a bad idea.
Some argue that needing a trusted certificate for just a personal page is extreme, but this is one of those cases where the greater good has to win out. For instance: if we train people that self-signed certs are fine to trust in some circumstances, they'll end up clicking OK to trust them in circumstances where they really shouldn't. This can seem a bit nanny-ish, but people are often dumb, or just lazy to the point where it is sometimes indistinguishable from dumb (I'm counting myself here!) so need a bit of nannying. And anyway, if your site doesn't take any input then no browser will (yet) complain about plain HTTP.
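For context, "becoming your own root of trust" is exactly one openssl invocation away; a sketch with a made-up hostname, showing why it scales so badly - every visitor would have to verify the fingerprint out of band:

```shell
# Mint a self-signed certificate (hostname is a placeholder).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout kittens.key -out kittens.crt \
  -days 365 -subj "/CN=fluffykittens.example"

# Anyone trusting this cert would need to check its fingerprint
# through some independent channel first:
openssl x509 -in kittens.crt -noout -fingerprint -sha256
```

Nothing stops an attacker from minting an identical-looking certificate for the same name, which is the gap the CA model papers over.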
> It also discourages naive experimentation
When something could affect security, discouraging naive experimentation on the public network is a good thing IMO. Do those experiments more locally, or at least on hosts you don't expect the public to access.
However, I think there is no reason a decentralized system couldn't be far, _far_ simpler for a user to set up (not to mention far more secure and private). Crypto gets a lot of hate on HN, but that seems mostly due to people's dislike of anything dealing with 'currency' or the financial systems that touch it. This is a despised opinion here, but I am still actually excited for crypto systems that solve real-world problems like TLS certs, DNS, et al.
Iroh seems like a _fantastic_, phenomenal system to showcase this idea. It allows for a very fast decentralized web experience built on modern technology such as BLAKE3, QUIC, and so on, but doesn't touch any financial stuff at all. It's simply a good system.
I hope we can slowly move to a system that uses the decentralized consensus algorithms created in the crypto space to remove the trust in (typically big, corporate, and likely backdoored) centralized entities that our system today _requires_ without any alternative.
If the website really isn't maintained, then it's only a matter of time until the server is part of a botnet. Setting up LE for a simple site takes half an hour once.
Beyond that, TLS also adds points of failure. For one, it prevents users from accessing websites that are still operational but have an outdated cert or some other configuration issue. And HSTS even requires browsers to deprive users of the agency to override default policies and access the site anyway.
TLS is also a complex protocol with complex implementations that can bring their own security issues, e.g. Heartbleed.
There are also many cases where there are holes in the security. E.g. old HTTP links, even if they redirect to HTTPS, provide an opportunity for interception. Similarly, entering domain names without a scheme requires browsers to either allow downgrade to HTTP or break older sites. The solutions to this (mainly HSTS and HSTS preload) don't scale and bring many new issues (policy lifetimes outlive domain ownership, taking away user agency).
In my ideal world:
a) There would be no separate HTTPS URL scheme for secure connections. Cool URIs don't change, and the transport security doesn't change the resource you are addressing. A separate protocol doesn't prevent downgrade attacks in all cases anyway (old HTTP URLs, entering domains in the address bar, no indication of TLS version and supported ciphers in the scheme).
b) Trust should be provided in a hierarchical manner, just like domains themselves - e.g. via DNSSEC+DANE.
c) This mechanism would also securely inform browsers about what protocols and ciphers the server supports to allow for backwards compatibility with older clients (where desired) while preventing downgrade attacks on modern clients.
d) Network operators that interfere with the transmitted data are dealt with by legal means (loss of common carrier status at the very least, but ideally the practice should be outright illegal). Unencrypted connections shouldn't allow service providers to get away with scamming you.
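For point b), the DANE mechanism pins a key in DNS rather than relying on a CA. A sketch of how the payload of a "3 1 1" TLSA record (DANE-EE, SubjectPublicKeyInfo, SHA-256) is derived, using a throwaway self-signed cert and a placeholder domain:

```shell
# Throwaway cert standing in for the server's real certificate.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout dane.key -out dane.crt -days 30 -subj "/CN=example.invalid"

# SHA-256 over the DER-encoded public key. The resulting hex string is
# what would be published (DNSSEC-signed) as:
#   _443._tcp.example.invalid. IN TLSA 3 1 1 <hash>
openssl x509 -in dane.crt -noout -pubkey \
  | openssl pkey -pubin -outform DER \
  | openssl dgst -sha256 -hex
```

A client validating via DANE compares this hash against the key offered in the TLS handshake, so trust flows down the DNS hierarchy instead of through third-party CAs.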
The fundamental problem is a question of trust. There’s three ways:
* Well known validation authority (the public TLS model)
* TOFU (the default SSH model)
* Pre-distribute your public keys (the self-signed certificate model)
Are there any alternatives?
If your requirement is that you don’t want to trust a third party, then don’t. You can use self-signed certificates and become your own root of trust. But I think expecting the average user to manually curate their roots of trust is a clearly terrible security UX.
The obvious alternative would be a model where domain validated certificates are issued by the registrar and the registrar only. Certificates should reflect domain ownership as that is the way they are used (mostly).
There is a risk that Let's Encrypt and other "good enough" solutions take us further from that. There are also many actors with an economic interest in the established model, both in the PKI business and among consultants for whom law enforcement agencies are important customers.
If the answer is to walk down the DNS tree, then you have basically arrived at DNSSEC/DANE. However I don’t know enough about it to say why it is not more widely used.
Utilizing DNS, whois, or a purpose built protocol directly would alleviate the problem altogether but should probably be done by way of an updated TLS specification.
Any realistic migration should probably exist alongside the public CA model for a very long time.
https://news.ycombinator.com/item?id=41916478
There's issues with it, but it is an alternative model, and I could see it being made to work.
I don’t see how it has too many advantages (for the internet) over creating your own CA. If you have a mutually trusted group of people, then they can all share the private key and sign whatever they trust.
I think the main problem is that it doesn’t scale. If party A and party B who have never communicated before want to communicate securely (let’s say from completely different countries), there’s no way they would be able to without a bridge. With central TLS, despite the downsides, that is seamless.
Interest is probably going to be low but not zero - I often play games long after they have been released and sometimes intentionally using older versions that are no longer supported by current mods.
If I do everything perfectly, but the CA I used makes some trivial error which, in the case of my certificate, has no real-world security impact? They can send me an e-mail at 6:40 PM telling me they're revoking my certificate at 2:30 PM the next day. Just what you want to find in your inbox when you get in the next day. I hope you weren't into testing, or staged rollouts, or agreeing deployment windows with your users - you'd better YOLO that change into production without any of that.
Even though it wasn't your mistake, and there's no suggestion you shouldn't have the certificate you have.
As far as the CA/B Forum is concerned, safety-critical systems that can't YOLO changes straight into production with minimal testing and only a few hours of notice don't belong on their PKI infrastructure. You'd better jump to it and fix their mistake right now.
Anyone whose certbot run was between 2pm and 6pm would get their cert revoked the next day at 2pm anyway - even if it was only issued 18 hours ago.
There's also a higher level question: Is this the web we want to be building? One where every site and service has to apply for permission to continue existing every 24 hours? Do we want a web where the barrier to entry for hosting is a round-the-clock ops team, complete with holiday cover? And if you don't have that, you should be using Facebook or Twitter instead?
The lack of understanding from us as technologists for people who once had a working site and are now forced to choose between an oligopoly of site-hosting companies or having their site break repeatedly as TLS standards rotate is one thing that brings me shame about our community.
You can come up with all kinds of reasons to gatekeep website hosting: "they have to update anyway" even when updating means reinstallation of an OS, "it's not that hard to rotate" say people with deep knowledge of computers, "just get someone else to do it" say people who have a financial interest in it being that way.
Framing people with legitimate issues as weirdos is not as charming as you think it is.
Also, the kebab shop probably has a form for reservations or ordering, which takes personal information.
True, they are all low risk things, but getting TLS is trivial (since many Webservers etc can do letsencrypt rotation fully automatically) and secure defaults are a good thing.
They’ve nearly all been lost to time now though, if a shop has a web-presence it will be through a provider such as “bokabord”, doordash, ubereats (as mentioned), some of whom charge up to 30% of anything booked/ordered via the web.
But, I guess no MITM can manipulate prices… except, by charging…
If you care about the integrity of the conveyed information you need TLS. If you don't, you wouldn't have published a website in the first place.
A while back I saw a WordPress site for a podcast without HTTPS where people also argued it doesn't need it. They had banking information for donations on that site.
Sometimes I wish every party involved in transporting packets on the internet would just mangle all unencrypted http that they see, if only to make a point...
Like, "telnet textfiles.com 80", then "GET / HTTP/1.0" <enter>, "Host: textfiles.com" <enter><enter>, and you have the page.
What would be the point of making these unencrypted sites disappear?
I'd argue that that is most likely an objectively false statement, and that the domain owner is in no position to authoritatively answer whether it has ever served ads in that time. As it is served without TLS, any party involved in transporting the data can mess with its content and e.g. insert ads. There are a number of reports of ISPs having done exactly that in the past, and some might still do it today. Therefore it is very likely that textfiles.com as shown in someone's browser has indeed had ads at some point in time, even if the one controlling the domain didn't insert them.
Textfiles also contains donation links for PayPal and Venmo. That is an attractive target to replace with something else.
And that is precisely the point: without TLS you do not have any authority over what anyone sees when visiting your website. If you don't care about that then fine, my comment about mangling all http traffic was a bit of hyperbole. But don't be surprised when it happens anyway and donations meant for you go to someone else instead.
If you browse through your smart TV, and the smart TV overlays an ad over the browser window, or to the side, is that the same as saying the original server is serving those ads? I hope you agree it is not.
If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?
If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?
If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?
If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?
Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.
I therefore conclude that your interpretation is meaningless.
> "as shown in someones browser"
Which is different than being served by the server, as I believe I have sufficiently demonstrated.
> But don't be surprised when it happens anyway
Jason Scott, who runs that site, will not be surprised.
I agree it is not. That is why I didn't say that the original server served ads, but that the _domain_ served ads. Without TLS you don't have authority over what your domain serves, with TLS you do (well, in the absence of rogue CAs, against which we have a somewhat good system in place).
> If you use a web browser from a phone vendor who has a special Chromium build which inserts ads client-side in the browser, do you say that the server is serving those ads? Do you know that absolutely no browser vendors, including for low-cost phones, do this?
This is simply a compromised device.
> If your ISP requires you configure your browser to use their proxy service, and that proxy service can insert ads, do you say that the server is serving those ads? Are you absolutely sure no ISPs have this requirement?
This is an ISP giving you instructions to compromise your device.
> If you use a service where you can email it a URL and it emails you the PDF of the web site, with some advertising at the bottom of each page, do you say the original server is really the one serving those ads?
No, in this case I am clearly no longer looking at the website, but asking a third-party to convey it to me with whatever changes it makes to it.
> If you read my web site though archive.org, and archive.org has its "please donate to us" ad, do you really say that my site is serving those ads?
No, archive.org is then serving an ad on their own domain, while simultaneously showing an archived version of your website, the correctness of which I have to trust archive.org for.
> Is there any web site which you can guarantee it's impossible for any possible user, no matter the hardware or connection, to see ads which did not come from the original server as long as the server has TLS? I find that impossible to believe.
Fair point. I should have said that I additionally expect the client device to be uncompromised, otherwise all odds are off anyway as your examples show. The implicit scenario I was talking about includes an end-user using an uncompromised device and putting your domain into their browsers URL bar or making a direct http connection to your domain in some other way.
They want the historical integrity, which includes the lack of data integrity that you want.
In practice, many pages are also intentionally compromised by their authors (e.g. including malware scripts from Google), and devices are similarly compromised, so end-to-end "integrity" of the page isn't something the device owner even necessarily wants (c.f. privoxy).
The cryptography community would have you believe that the only solution to getting scammed is encryption. It isn't.
The NSA was installing physical devices at network providers that scoured through all traffic - they did not need Agent Smith opening envelopes or even looking at them. Keep in mind criminals could do the same by paying off some employees at a provider, and not all network providers are in countries where law enforcement works - and, as mentioned, your data can pass through any such provider.
If I send physical mail I can be sure it is not going through Bangkok unless I specifically send it with destination that requires it to go there.
Nothing, really. But for physical mail the attacks against it don't scale nearly as well: you would need to insert yourself physically into the transportation chain and do physical work to mess with the content. Messing with mail is also taken much more seriously as an offense in many places, while laws are not as strict for network traffic generally.
For telephone conversations, at least until somewhat recently, the fact that synthesizing convincing speech in real time was not really feasible (especially not if you tried to imitate someone's speech) ensured some integrity of the conversation. That has changed, though.
And prices are more likely to be simply outdated than modified by a malicious entity. Your concerns are not based in reality.
It’s like a vaccine. We vaccinated most of the web against a very bad problem, and that has stopped the problem from happening in the first place. If 90% were still on http, way more ISPs would insert ads.
There are more than enough forgotten kebab-shop pages now serving malware because WordPress was never updated that an out-of-date certificate warning is a very good "heads up, this site hasn't been maintained in 6 years".
If we're talking hosting even a static HTML file without using a site hosting company, that already requires so much technical knowledge (Domain purchasing, DNS, purchasing a static IP from your ISP, server software which again requires vuln updates) that said person will be able to update a TLS cert without any issue.
[citation needed]
There are plenty of organizations that actively scan the web for "malware" (aka anything that the almighty machine learning algorithms don't like) and are more than happy to harass the website owner and hosting company until their demands are met.
Security is ultimately a social issue. Technical means are only one way to improve it and can never solve it 100%. You must never lose sight of the cost imposed by technological security solutions versus the improvement they actually offer.
However, if you have already bought a domain name, the cost of setting up TLS is basically zero. You just run certbot and give it the domains you want certificates for. It will set up auto-renewal and even edit your Apache/NGINX configs to enable TLS.
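For illustration, the whole setup described above is roughly this (domain names are placeholders; not runnable without a live domain pointing at the machine):

```shell
# One-time setup: certbot proves control of the domains, obtains the
# certificate, edits the nginx config, and installs a renewal timer.
certbot --nginx -d example.com -d www.example.com

# Renewal then happens unattended; it can be exercised in advance with:
certbot renew --dry-run
```

The `--apache` plugin works the same way for Apache; `certbot certonly` skips the config editing if you prefer to wire up the certificate yourself.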
Sure, TLS standards rotate. But that just means you have to update Apache/NGINX every like 5 years. Hardly a barrier for most people imo.
certbot is a python program - better hope it keeps working. It definitely hasn't kept working for me, and I'm a seasoned sysadmin: a combination of my Python environment becoming outdated (making updates impossible) and the deprecation of a critical API it needed.
The #1 cause of issues with my hobby website, darkscience.net, is that it refuses to negotiate with Chrome because its TLS suites are considered too old, yet in 2020 I was scoring A+ on the Qualys SSL report.
It's just time, time and effort, and it's mostly wasted.
The letsencrypt tools are really wonderful, just pray they don’t break, and be ready to reinstall everything from scratch at some point.
You could try out acme.sh, which is written purely in shell. It's extremely capable and supports the DNS challenge and multiple providers.
https://github.com/acmesh-official/acme.sh
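An illustrative acme.sh run using a DNS-01 challenge (the domain is a placeholder and the credential value is elided; `dns_cf` is acme.sh's Cloudflare plugin, one of many DNS provider plugins it ships - not runnable without real credentials):

```shell
# Credentials for the DNS provider's API, read by the dns_cf plugin.
export CF_Token="<api-token>"

# DNS-01 never needs port 80/443 open, and is the only challenge type
# that can issue wildcard certificates.
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'
```

Being plain POSIX shell, acme.sh sidesteps the Python-environment rot described above.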
There is also https://github.com/srvrco/getssl which is a bash script. I lightly audited it years ago and it did not seem to upload your private keys anywhere... I've used it occasionally, but I don't let it run as root, so I need to copy the retrieved certs into the server config manually.
The larger point is that this is required for what amounts to a poster on a wall: yes, someone can come along with a pen and alter the poster - but for most people it's not worth the effort to secure, and it will degrade rapidly even with such security.
So, instead they turn to middlemen, or don’t bother.
There's a myriad of other issues, but it's not as easy as we claim.
certbot is not even close to the pinnacle of easy TLS setup. Using an HTTP server that fully integrates ACME and tls-alpn-01 is much nicer: tell your server what domain you use, and it automatically obtains a certificate.
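A sketch of what that looks like with Caddy, one server that integrates ACME this way (domain and paths are placeholders): naming the site is the entire TLS configuration.

```shell
# The whole "TLS setup": a site address and what to serve.
cat > Caddyfile <<'EOF'
example.com {
    root * /var/www/html
    file_server
}
EOF

# caddy run --config Caddyfile
# On startup Caddy would obtain a certificate via ACME (using
# tls-alpn-01 or http-01) and keep renewing it in the background.
```

No certbot, no cron job, no separate renewal path to break.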
There is regulation, like mandatory yearly inspections, and only road-worthy vehicles may be sold. These rules are rather strict, likewise for the driver's license. They aren't impossible to know or understand, but there are a lot of details.
However, when I take it to the shop, whether for that yearly inspection, regular maintenance, or because there's something apparently wrong with it, I never know what to expect in terms of time and money.
Oh, it needs a new thingamajig? I start to mildly sweat, fearing it to cost six hundred like the flux capacitor that had to be replaced last week/month/year and took two weeks to get shipped from another country. "Ninety cents, and we put it in place for no charge, it literally takes ten seconds", like, I love to hear the news, could have saved me from the anguish by giving a hint when I asked about the price! But need a new key? Starting from three hundred fifty, plus one hundred seventy for a backup copy. Like, where do these prices come from? Actually, don't tell me, I'm a software engineer. I know, I know.
I'll just wait until you want your car shop web pages up. Oh, for that you'll need PCI DSS and we can't do that other things because of GDPR. Sorry, my hands are tied here. That'll be four thousand plus tax, mister auto mechanic shop owner.
Safe transfer should be the default.
Your argument is akin to "I don't have anything to hide."
You just do it and don't think about it. Modern servers and services make this completely transparent.
The kebab guy doesn't need to worry about this as long as he isn't fooled into buying from mala fide hosting companies that try to upsell something that should be the baseline.
While we might be able to find common ground in the statement that "safe transfer should be the default", we will differ on the definition of "safe".
Unfortunately these discussions often end up in techno-babble. Especially here on HN, where we tend to enjoy rather binary viewpoints without too many shades of gray.
Try being your own devil's advocate: "What if I have something to hide?".
Then deal with that. Legitimately. Reasonably. Unless you are an anarchist, I assume we can agree that we need authorities. A legal framework. Policing.
So I 100% support Let's Encrypt and what they have done to destroy the certificate racket. That is a force of good!
But I do not think it was a healthy thing that the browsers (and Google search results) de facto "forced" the world to TLS only.
Why? Look at the list of Trusted Root Certificates in the big OS and browsers. You are telling me only good guys are listed? None here are or can be influenced by state actors?
But that is the good kind of MITM? This then hinges on your definition of "safe transport". Only the anarchist can win against the government. I am not one.
It might sound like I am in the "I do not have anything to hide" camp. I am not that naive. But I am firmly in the "I prefer more scrutiny when I have something to hide". Because the measures the authorities needs to employ today are too draconian for my liking.
I preferred the risk of MITM at the ISP level to what the authorities need to do now to stay in control. We have not eliminated MITM. Just made it harder. And we forgot to discuss legitimate reasons for MITM because "bad".
This is not a "technical" discussion on the fine details of TLS or not. But should be a discussion about the societal changes this causes. We need locks to keep the creeps out but still wants the police to gain access. The current system does not enable that in a healthy way but rather erodes trust.
Us binary people can define clear simple technical solutions. But the rest of the world is quite messy. And us bit twiddlers tend to shy away from that and then ignore the push-back to our actions.
We cannot have a sober conversation unless we depart from the "encrypt everything" is technically good and then that is set in stone. But here we are: Writing off arguments as irrelevant.
They usually counter with “but SSH uses TOFU” because they don't see, and can't be convinced of, the problem of not verifying the server key signature⁰. I can be fairly sure that I'm talking to the daemon that I've just setup myself without explicitly checking the signature¹, but that particular side-channel assurance doesn't apply to, for example, a client connecting to our SFTP endpoint for the first time² to send us sensitive data.
--
[0] Basically, they get away with doing SSH wrong, and want to get away with doing HTTPS wrong the same way.
[1] Though I still should, really, and actually do in DayJob.
[2] Surprisingly few banks' tech teams bother to verify SSH server signatures on first connection, I know because the ones in our documentation were wrong for a time and no one queried the matter before I noticed it when reviewing that documentation while adding further details. I doubt they'd even notice the signature changing unexpectedly even though that could mean something very serious is going on.
Plus, setting up Let's Encrypt isn't always easy. Last time it was failing because I had disabled HTTP on port 80 entirely on my server… but Let's Encrypt uses that to verify that my website has the magic file. So I had to make a script to turn it on for 5 minutes around the time the certificate gets renewed. -_-'
None of this is easy or quick, and people have other stuff to do than to worry about completely hypothetical attacks on their blog.
So, instead, use the other authentication methods. For example, DNS.
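For illustration, a DNS-01 renewal never touches port 80 at all. A sketch with certbot's Cloudflare plugin (plugin choice, credentials path, and domain are placeholders; certbot ships similar plugins for several DNS providers - not runnable without real credentials):

```shell
# The challenge is satisfied by publishing a TXT record via the DNS
# provider's API, so the web server's ports are never involved.
certbot certonly --dns-cloudflare \
  --dns-cloudflare-credentials /etc/letsencrypt/cloudflare.ini \
  -d example.com
```

This also removes the need for the "open port 80 for 5 minutes" script described above.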
How's that?
Google "isp injecting ads", well most of it is from 10 years ago - but that is because now we have TLS everywhere.
And it is not an attack on your blog but on its readers - though your blog gets the blame, of course, if they are infected with malware or see adult ads.
It's nice that you can now get free TLS certs without having to resort to shady outfits like StartSSL. This allows any website to easily move to HTTPS, which has basically eliminated sensitive data (including logins) being sent over unencrypted connections.
On the other hand, this reinforces the inherently broken trust model of TLS certificates, where any certificate authority (and a lot of them are controlled by outright hostile entities) has the ability to issue certificates for your domain without your involvement. Yes, there are tons of kludges to try and mitigate this design flaw (CAA records, certificate transparency), but they don't 100% solve the issue. If not for LE, perhaps there would have been more motivation to implement support for a saner trust mechanism by now that limits certificate issuance to the entities that actually have authority to decide over domain ownership, like with DNSSEC+DANE.
I'm also concerned with the (intentional) lack of backwards compatibility with moving sites to TLS, which is not just a one time TLS on/off issue but a continual deprecation of protocols and ciphers. This is warranted for things that need to be secure like banking or email but shouldn't really be needed to view a recipe or other similar static and non-critical information. Concerns about network operators inserting ads or other shit are better solved with regulation.
I would argue that LE has only highlighted these problems, and now actually causes people with power to worry about them.
There is a chance we would have gotten something better than TLS if the lack of LE kept certificates a pain. But that seems unlikely to me. Because the fundamental problem remains hard.
Does anyone remember how we renewed certificates before LE? Yeah, private keys were being sent via email as zip attachments. That was a security charade. And as far as I know, it was a norm among CAs (I remember working with several).
Thank you Let's Encrypt.
I generate the new key on the server as part of the csr creation process. I run it on the server itself so the key never leaves the server's internal storage.
CSR gets sent off to globalsign (via a third party because #largeCompany), then a couple of days later I get the certificate back and apply to the server
Would love to use ACME instead, and store the key in memory (ramdrive etc), but these are the downsides of working for a company less agile than an oil-tanker
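For anyone unfamiliar with the workflow the parent describes, a minimal sketch of on-server key and CSR generation with openssl looks roughly like this (filenames and the subject are placeholders):

```shell
# Generate a fresh private key on the server; it never leaves local storage.
openssl genrsa -out example.com.key 2048

# Build a CSR signed by that key; only this file is sent off to the CA.
openssl req -new -key example.com.key -out example.com.csr \
    -subj "/CN=example.com"
```

The certificate that eventually comes back from the CA is then installed alongside the key that never left the machine.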
(I was only slightly involved with a couple of TLS certificates before then, and they certainly enforced the CSR approach, but maybe such terrible practice was more common in the real world than I knew.)
But the point still stands: the whole process was a nightmare - no automation, error prone, renewals easy to forget...
The large companies could have had a staff to manage all that. I was just a solo developer managing my own projects, and it was a hassle.
You can sort of hack this together with scripting via things like terraform, cron jobs, or whatever. But it gets ugly, and the failure mode is that your site stops working if for whatever reason the certificates fail to renew (I've had this happen), which, courtesy of really short certificate lifetimes, is of course often.
So, I paid the wildcard certificate tax a few days ago so I don't have to break my brain over this. A couple of hundred. Makes me feel dirty but it really isn't worth days of my time to dodge this for the cost of effectively < 2 hours of my time in $. Twenty minute job to issue the csr, get the certificate and copy it over to the relevant load balancers.
Internally, perhaps. And also on a small scale maybe with CA "resellers" who were often shady outfits which were in it for a quick buck and didn't much care about the rules.
But as a formal issuance mechanism I very much doubt it. The public CAs are prohibited from knowing the private key for a certificate they issue. Indeed, there was a fun incident some years back where a reseller (who had been squirrelling away such private keys) sent them all to the issuing CA, apparently thinking this was some sort of trump card - and so the issuing CA just... revoked all those certificates immediately, because they're prohibited from knowing these private keys.
The correct thing to do, and indeed the thing ACME is doing, although not the interesting part of the protocol, is to produce a Certificate Signing Request. This data structure goes roughly as follows: Dear Certificate Authority, I am Some Internet Name [and maybe more than one], and here are some other facts you may be entitled to certify about me. You will observe that this document is signed, proving I know a Private Key P. Please issue me a certificate, with my name and other details, showing that you associate those details with this key P which you don't know. Signed, P.
This actually means (with ACME or without) that you can successfully air gap the certificate issuance process, with the machine that knows the private key actually never talking to a Certificate Authority at all and the private key never leaving that machine. That's not how most people do it because they aren't paranoid, but it's been eminently possible for decades.
That sounds like a fun story. I'd love to read the post-mortem if it's public.
https://www.theregister.com/2018/03/01/trustico_digicert_sym...
[Edited: I originally said Trustico was out of business, but astoundingly the company is still trading. I have no Earthly idea why you would pay incompetent people to do something that's actually zero cost at point of use, but er... OK]
The claims from Trustico are very silly. They want their customers to believe everything is fine, and yet the only possible way for this event to even occur is that Trustico are at best incompetent. To me this seems like one of those Gerald Ratner things, where you make it clear your product is garbage: usually the result is that your customers won't buy it, because if they believe you it's garbage, and if they don't believe you they won't want your product anyway. But whereas Ratner more or less destroyed a successful business, Trustico is still going.
So funny that all of their security, vetting and endless verifications stand, to this day, on a single passport photo sent over email.
I have never heard of any of them. Whatever that may mean.
Edit: On the whole list https://www.internethalloffame.org/inductees/all/ I spotted maybe seven names. Still a single digit percentage.
...but you’re missing the point of my comment, which is simply to acknowledge and honor (my late dear friend) Peter.
My point was not to criticize the achievements or the work of any of those people.
1. I was not actively aware that this hall exists
2. I am mostly critical of such awards in general. I have noted that several companies receiving the "Export company of the year" here in this country (doesn't matter which one) went bust a couple of years later. I received the "hacker of the year" award at my workplace some years ago. It was supposed to hang with all the previous awards in the cafeteria. I did not like that and "forgot" it at home. I quit the company a year later anyway.
Edit: Forgot that I worked for the "software product of the year" twice in my life. One needed heavy, painful architectural rework 3 years later. The other was Series 60. People old enough know how that went, killed a global market leader.
To explain the issue with HTTPS certificates simply: issuance is automated and rests on the security of DNS, which is only achieved via DNSSEC, and most domains do not implement it.
Trouble is even CAA entries won't help here (if you're spoofing A records, you can spoof CAA records too). DNSSEC might help against this, I don't know enough about DNS though.
Another type of attack is an IP hijack, which lets you pass things like HTTP-based validation (the normal ACME method) but won't bypass CAA records. You can't use Let's Encrypt to issue a cert - even if you own the IP address my A or AAAA records point to - if my CAA record doesn't list letsencrypt as an approved issuer.
Another option is manual certificate issuance with a CA whose security model is better than yours, but not implementing DNSSEC leaves you open to other attacks.
Generally speaking, setting up DNSSEC is probably a bad move for most sites.
Basing it on an open protocol, so it doesn't become a single point of failure, was a clever idea that allows the idea to survive the demise of any single organization.
May there be many more such anniversaries.
But I guess automation and standards had to catch up in order for LE to securely setup their CA.
That said, I’m wondering why there aren’t 10 or so popular alternatives to LE, since that seems to be the landscape for domain registrars, for example.
In 2024, if your PaaS does not have automated encryption for deploys, I will never use it.
Do you really check your site has an EV certificate every single time? Especially now that browsers treat them the same?
If not, how do you know someone hasn't got a DV certificate for this specific visit?
Scott Helme has a thorough takedown of them, and that was 7 years ago when they were still a thing.
https://scotthelme.co.uk/are-ev-certificates-worth-the-paper...
EV and OV certs, when they include DNS names, still require domain control validation anyway.
EV certs are generally manually verified. This means there’s a human factor in the middle of this process. DV certs can, and should, be fully automated.
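As a sketch of what that full automation looks like, DV issuance with an ACME client such as certbot is a single non-interactive command (domain, webroot and email are placeholders, and this obviously only succeeds against a live, publicly reachable host):

```shell
# Hypothetical one-shot DV issuance via the HTTP-01 challenge; after this,
# renewal is just `certbot renew` from a timer, with no human in the loop.
certbot certonly --webroot -w /var/www/example -d example.com \
    -m admin@example.com --agree-tos --non-interactive
```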
Multi perspective validation is about to be required too: https://cabforum.org/2024/11/07/ballot-smc010-introduction-o...