OpenSSL 4.0.0 (github.com)
capitol_ 4 hours ago [-]
Finally encrypted client hello support \o/
bombcar 3 hours ago [-]
Is this something that we can enable "today" or is it going to take 12 years for browsers and servers to support?
kro 3 hours ago [-]
Nginx mainline 1.29.x supports it, so once you have that and a new enough OpenSSL on your system, you're good to go. Likely too late for Ubuntu 26.04, maybe in Debian 14 next year, or of course rolling-release distros / containers.

But on a personal / single-website server, ECH does not really add privacy: adversaries can still observe the IP metadata and check what's hosted there. The real benefits are on huge cloud hosting platforms.

Bender 2 hours ago [-]
FWIW Nginx 1.30 [1] just released and supports it, so most distributions will have support as soon as the people responsible for building and testing push it forward.

"Nginx 1.30 incorporates all of the changes from the Nginx 1.29.x mainline branch to provide a lot of new functionality like Multipath TCP (MPTCP)."

"Nginx 1.30 also adds HTTP/2 to backend and Encrypted Client Hello (ECH), sticky sessions support for upstreams, and the default proxy HTTP version being set to HTTP/1.1 with Keep-Alive enabled."

> But, in a personal/single website server, ech does not really add privacy, adversaries can still observe the IP metadata and compare what's hosted there

I don't quite follow. I have dozens of throw-away silly hobby domains. I can use any of them as the outer SNI. How is someone observing the traffic going to know the inner-SNI domain, unless someone builds a massive database of all known inner+outer combinations, which can be changed on a whim? ECH requires DoH, so unless the ISP has tricked the user into using their DoH endpoint, they can't see the HTTPS resource record.

[1] - https://news.ycombinator.com/item?id=47770007
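
To make the outer/inner split concrete: the outer name ECH exposes on the wire is the `public_name` field of the ECHConfig that the server publishes in its HTTPS resource record. Here is a minimal Python sketch of pulling that field out of an ECHConfigList, assuming the RFC 9849 wire format (version 0xfe0d); `make_ech_config_list` is a hypothetical helper just for building a demo value, not anything a real TLS stack exposes:

```python
import struct

def make_ech_config_list(public_name: bytes, config_id: int = 1) -> bytes:
    """Build a single-entry ECHConfigList for demonstration (hypothetical
    helper; real configs come from the server's TLS stack / DNS)."""
    public_key = b"\x00" * 32                      # placeholder X25519 key
    suites = struct.pack(">HH", 0x0001, 0x0001)    # one (kdf_id, aead_id) pair
    contents = (
        bytes([config_id])
        + struct.pack(">H", 0x0020)                # kem_id: DHKEM(X25519)
        + struct.pack(">H", len(public_key)) + public_key
        + struct.pack(">H", len(suites)) + suites
        + bytes([64])                              # maximum_name_length
        + bytes([len(public_name)]) + public_name  # public_name = outer SNI
        + struct.pack(">H", 0)                     # no extensions
    )
    config = struct.pack(">HH", 0xFE0D, len(contents)) + contents
    return struct.pack(">H", len(config)) + config

def ech_public_names(config_list: bytes) -> list[str]:
    """Extract public_name (the outer SNI) from each ECHConfig in an
    ECHConfigList, walking the length-prefixed fields in order."""
    (total,) = struct.unpack_from(">H", config_list, 0)
    pos, end, names = 2, 2 + total, []
    while pos < end:
        version, length = struct.unpack_from(">HH", config_list, pos)
        body = config_list[pos + 4 : pos + 4 + length]
        pos += 4 + length
        if version != 0xFE0D:          # unknown/GREASE version: skip it
            continue
        off = 1 + 2                    # config_id (1 byte) + kem_id (2 bytes)
        (pk_len,) = struct.unpack_from(">H", body, off)
        off += 2 + pk_len              # public_key
        (cs_len,) = struct.unpack_from(">H", body, off)
        off += 2 + cs_len              # cipher_suites
        off += 1                       # maximum_name_length
        name_len = body[off]
        off += 1
        names.append(body[off : off + name_len].decode("ascii"))
    return names
```

An on-path observer who can't see the DoH lookup only ever sees `public_name` in the ClientHello; the real (inner) name rides inside the encrypted extension.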

ameliaquining 23 minutes ago [-]
It's not that adversaries can directly see the domain name; this doesn't have anything to do with domain fronting. The issue is that ECH doesn't hide the server's IP address, so it's mostly useless for privacy if that IP address uniquely identifies that server. The situation where it helps is if the server shares that IP address with lots of other people, i.e., if it's behind a big cloud CDN that supports ECH (AFAIK that's currently just Cloudflare). But if that's the case, it doesn't matter whether Nginx or whatever other web server you run supports ECH, because your users' TLS negotiations aren't with that server, they're with Cloudflare.
arcfour 3 hours ago [-]
Cloudflare has supported it since 2023: https://blog.cloudflare.com/announcing-encrypted-client-hell... Firefox has had it enabled by default since version 119: https://support.mozilla.org/en-US/kb/faq-encrypted-client-he... so you can use it today.
bombcar 3 hours ago [-]
https://tls-ech.dev indicates that Safari doesn't support it, but Chrome does.
altairprime 2 hours ago [-]
That’s likely because iOS/macOS don’t yet ship it enabled by default in production; there’s an experimental opt-in flag at the OS level, but Safari apparently hasn’t (yet) added a dev feature switch for it.

https://developer.apple.com/documentation/security/sec_proto...

Presumably anyone besides Safari can opt in to that testing today, but I wouldn’t ship it worldwide and expect nice outcomes until (I suspect) after this fall’s 27 releases. Maybe someone could PR the WebKit team to add that feature flag in the meantime?

ekr____ 42 minutes ago [-]
Even if the browsers and servers don't support it, you could still enable it because the system is designed to be backward compatible.
tialaramex 1 hours ago [-]
TLS (the IETF Working Group, not the protocol family named after it) has long experience with the fact that if you specify how B is compatible with A based on how you specified A, then ship B, it won't work, because the middleboxes are all cost-optimized: they don't implement what you specified, but whatever closed the sale for the least investment.

So e.g. they'd handle exactly the way TLS 1.0 is used in the Netscape 4 web browser that was popular when the middlebox was first marketed. Or maybe they cope with exactly the features used in Safari, but since Safari never sets this particular bit flag, they reject all connections with that flag set.

What TLS learned is summarized as "have one joint and keep it well oiled" and they invented a technique to provide that oiling for one working joint in TLS, GREASE, Generate Random Extensions And Sustain Extensibility. The idea of GREASE is, if a popular client (say, the Chrome web browser) just insists on uttering random nonsense extensions then to survive in the world where that happens you must not freak out when there are extensions you do not understand. If your middlebox firmware freaks out when seeing this happen, your customers say "This middlebox I bought last week is broken, I want my money back" so you have to spend a few cents more to never do that.

But, since random nonsense is now OK, we can ship a new feature and the middleboxes won't freak out, so long as our feature looks similar enough to GREASE.

ECH uses the same idea: when a participating client connects to a server which, as far as it knows, does not support ECH, it acts exactly as it would for ECH, except that since it has neither a "real" name to hide nor a key to encrypt that name, it fills the space where those would go with random gibberish. As a server, you get this ECH extension you don't understand, filled with random gibberish you also don't understand, and that seems fine because you didn't understand any of it (or maybe you've switched it off; either way it's not relevant to you).

But for a middlebox this ensures they can't tell whether you're doing ECH. So either they reject every client which could do ECH, which, again, is how you get a bunch of angry customers, or they accept such clients, and so ECH works.
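
For the curious, RFC 8701 pins down exactly what the "random nonsense" values look like: the GREASE code points for extensions and cipher suites are the sixteen values following the 0x?A?A pattern. A toy sketch in Python, not tied to any real TLS stack:

```python
import random

# RFC 8701 GREASE code points for extensions / cipher suites: the sixteen
# values 0x0A0A, 0x1A1A, ..., 0xFAFA (both bytes end in the nibble 0xA,
# and both high nibbles match).
GREASE_VALUES = [(n << 12) | (n << 4) | 0x0A0A for n in range(16)]

def is_grease(value: int) -> bool:
    """True if `value` matches the 0x?A?A GREASE pattern."""
    return (value & 0x0F0F) == 0x0A0A and (value >> 12) == ((value >> 4) & 0xF)

def pick_grease() -> int:
    """What a GREASE-ing client does: pick one of the reserved values at
    random and advertise it as an extension the server has never seen."""
    return random.choice(GREASE_VALUES)
```

A middlebox that special-cases any of these defeats the purpose; the only survivable behavior is to ignore unrecognized values, which is exactly what ECH's gibberish-filled extension relies on.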

ocdtrekkie 3 hours ago [-]
Just be aware any reasonable network will block this.
Bender 2 hours ago [-]
> Just be aware any reasonable network will block this.

Russia blocked it for Cloudflare because the outer SNI was obviously just for ECH, but that won't stop anyone from using generic or throw-away domains as the outer SNI. As for "reasonable", I don't quite follow; only censorious countries or ISPs would do such a thing.

I can foresee firewall vendors possibly adding a category for known outer-SNI domains used for ECH, but at some point that list would be quite cumbersome and may run into the same problems as blocking CDN IP addresses.

quantummagic 2 hours ago [-]
Why is it "reasonable" to block it?
vman81 2 hours ago [-]
Well, I may want to have a say in what websites the employees at work access in their browsers. For example.
altairprime 2 hours ago [-]
That’s not a meaningful issue here. Either snoop competently or snoop wire traffic, pick one.

In the snooping-mandatory scenario, either you have a mandatory outbound PAC with an SSL-terminating proxy that refuses CONNECT traffic or only allows what it can root-CA MITM, or you have a self-signed root CA MITM’ing all encrypted connections it recognizes. The former will continue functioning just fine at providing that; the latter will likely already be having issues with certificate-pinned apps and operating system components, not to mention likely being completely unaware of 443/udp (QUIC), and should be scheduled for replacement by a solution that’s actually effective during your next capital budgeting interval.

kccqzy 2 hours ago [-]
That’s usually done not on the network side but through the device itself. Think MDM and endpoint management.
ocdtrekkie 2 hours ago [-]
A good solution is tackling it on both. At work we have network-level firewalls with separate policies for internal and guest networks, and our managed PCs sync a filter policy as well (though primarily for when those devices are not on our network). The network level is more efficient, easier to manage and troubleshoot, and works on appliances, rogue hardware, and other things that happen not to have client management.
ekr____ 41 minutes ago [-]
Well, if you have MDM you should be able to just disable ECH.
ocdtrekkie 34 minutes ago [-]
This is also indeed done on both. Browser policies.
kstrauser 48 minutes ago [-]
Once upon a time, "reasonable networks" blocked ICMP, too.

They were wrong then, of course, and they're still wrong now.

ocdtrekkie 43 minutes ago [-]
Once upon a time, like today? ICMP is most definitely only allowed situationally through firewalls today.
tredre3 9 minutes ago [-]
I'd say that ICMP is only situationally blocked by firewalls, not the other way around.

Because I can ping almost any public server on the internet and they will reply. I can ping your website just fine and it replies to me!

hypeatei 2 hours ago [-]
Procrastinators. FTFY.

Eventually these blocks won't be viable when big sites only support ECH. It's a stopgap solution that's delaying the inevitable death of SNI filtering.

ocdtrekkie 2 hours ago [-]
This will never happen. Because between enterprise networks and countries with laws, ECH will end up blocked a lot of places.

Big sites care about money more than your privacy, and forcing ECH is bad business.

And sure, kill SNI filtering, most places that block ECH will be happy to require DPI instead, while you're busy shooting yourself in the foot. I don't want to see all of the data you transmit to every web provider over my networks, but if you remove SNI, I really don't have another option.

hypeatei 51 minutes ago [-]
> Because between enterprise networks

> require DPI

Enterprises own the device that I'm connected to the network with, I don't see how you can get any more invasive than that.

> countries with laws

1) what countries do national-level SNI filtering, and 2) why are you using a hypothetical authoritarian, privacy-invading state actor as a good reason to keep plaintext SNI?

> Big sites care about money

Yes, and you could say that overbearing, antiquated network operators stop them from making more money with things like SNI filtering.

caycep 3 hours ago [-]
How is OpenSSL these days? I vaguely remember the big ruckus a while back (was it Heartbleed?) where everyone, to their horror, realized it was maybe 1 or 2 people trying to maintain OpenSSL, and the OpenBSD people then threw manpower at it to clear up a lot of old outstanding bugs. It seems like it is on firmer/more organized footing these days?
tptacek 2 hours ago [-]
The security side of OpenSSL improved significantly since Heartbleed, which was a galvanizing moment for the maintenance practices of the project. It doesn't hurt that OpenSSL is now one of the most actively researched software security targets on the Internet.

The software quality side of OpenSSL paradoxically probably regressed since Heartbleed: there's a rough consensus that the design of OpenSSL 3.0 was a major step backwards, not least for performance, and more than one large project (but most notably pyca/cryptography) is actively considering moving away from OpenSSL entirely as a result. Again: while security concerns might be an ancillary issue in those potential migrations, the core issue is just that OpenSSL sucks to work with now.

ImJasonH 1 hours ago [-]
On this topic, there was a great episode of a little-known podcast about Python cryptography and OpenSSL that was really eye opening: https://securitycryptographywhatever.buzzsprout.com/1822302/...

:)

dadrian 3 minutes ago [-]
I dunno, they'll let anybody get on the Internet and start a podcast.
kccqzy 3 hours ago [-]
It’s still terrible. There was a brief period immediately after Heartbleed when it was rapidly improving, but the entire OpenSSL 3 effort was a huge disappointment to anyone who cared about performance, complexity, and developer experience (ergonomics). Core operations in OpenSSL 3 are still much, much slower than in OpenSSL 1.1.1.

The HAProxy people wrote a very good blog post on the state of SSL stacks: https://www.haproxy.com/blog/state-of-ssl-stacks And the Python cryptography people wrote an even more damning indictment: https://cryptography.io/en/latest/statements/state-of-openss...

Here are some juicy quotes:

> With OpenSSL 3.0, an important goal was apparently to make the library much more dynamic, with a lot of previously constant elements (e.g., algorithm identifiers, etc.) becoming dynamic and having to be looked up in a list instead of being fixed at compile-time. Since the new design allows anyone to update that list at runtime, locks were placed everywhere when accessing the list to ensure consistency.

> After everything imaginable was done, the performance of OpenSSL 3.x remains highly inferior to that of OpenSSL 1.1.1. The ratio is hard to predict, as it depends heavily on the workload, but losses from 10% to 99% were reported.

> OpenSSL 3 started the process of substantially changing its APIs — it introduced OSSL_PARAM and has been using those for all new API surfaces (including those for post-quantum cryptographic algorithms). In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. This reduces performance, reduces compile-time verification, increases verbosity, and makes code less readable.

selfmodruntime 3 minutes ago [-]
There are few other options. `ring` is not for production use. WolfSSL lags behind in features a bit. BoringSSL and AWS-LC are the best we have.
gavinray 2 hours ago [-]

  > In short, OSSL_PARAM works by passing arrays of key-value pairs to functions, instead of normal argument passing. 
Ah yes, the ole' "fn(args: Map<String, Any>)" approach. Highly auditable, and Very Safe.
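The hazard being mocked here is easy to demonstrate outside of C. A toy Python analogy (not OpenSSL's actual API; `derive_direct`/`derive_params` are made-up names): compare direct arguments, which a type checker can verify, with an OSSL_PARAM-style array of string-keyed pairs, where a misspelled key is silently ignored at runtime:

```python
# Direct argument passing: the signature documents itself, and a type
# checker (or the C compiler, in OpenSSL's case) catches mistakes early.
def derive_direct(digest: str, iterations: int) -> str:
    return f"{digest}:{iterations}"

# OSSL_PARAM-style: an array of (name, value) pairs resolved by string
# lookup at call time. A misspelled name is not an error; the setting is
# silently dropped and a default is used instead.
def derive_params(params: list[tuple[str, object]]) -> str:
    table = dict(params)
    digest = table.get("digest", "sha256")
    iterations = table.get("iterations", 1)
    return f"{digest}:{iterations}"
```

The typo hazard in miniature: `derive_params([("digest", "sha512"), ("itterations", 100000)])` quietly runs with the default iteration count, while the same typo in a direct call fails immediately.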
wahern 2 hours ago [-]
I think one of the main motivators was supporting the new module framework that replaced engines. The FIPS module specifically is OpenSSL's gravy train, and at the time the FIPS certification and compliance mandate effectively required the ability to maintain ABI compatibility of a compiled FIPS module across multiple major OpenSSL releases, so end users could easily upgrade OpenSSL for bug fixes and otherwise stay current. But OpenSSL also didn't want that ability to inhibit evolution of its internal and external APIs and ABIs.

Though, while the binary certification issue nominally remains, there's much more wiggle room today when it comes to compliance and auditing. You can typically maintain compliance when using modules built from updated sources of a previously certified module which are in the pipeline for re-certification. So the ABI dilemma is arguably less onerous today than it was when the OSSL_PARAM architecture took shape. Today, as with Go, you can lean on process, i.e. constant cycling of the implementation through the certification pipeline, more than on technical solutions. The real unforced error was committing to OSSL_PARAM for the public application APIs, letting the backend design choices (flexibility, etc.) bleed through to the frontend. The temptation is understandable, but the ergonomics are horrible. I think the performance problems are less a consequence of OSSL_PARAM per se than of the architecture of state management between the library and module contexts.

georgthegreat 3 hours ago [-]
https://www.haproxy.com/blog/state-of-ssl-stacks

According to this, one should not be using v3 at all.

danudey 3 hours ago [-]
Nice that OpenSSL finally relented and provided an API for developers to use to implement QUIC support - last year, apparently.

For those not familiar: until OpenSSL 3.4.1, if you wanted to use OpenSSL and wanted to implement HTTP/3, which uses QUIC as the underlying protocol, you had to use their entire QUIC stack; you couldn't bring your own QUIC implementation and use OpenSSL only for the encryption parts.

QUIC, for those not familiar, is basically "what if we re-implemented TCP's functionality on top of UDP, but we could throw out all the old legacy crap". Complicated but interesting, except that if OpenSSL's implementation didn't do what you want or didn't do it well, you either had to put up with it or go use some other SSL library somewhere else. That meant that if you were using e.g. curl built against OpenSSL then curl also inherently had to use OpenSSL's QUIC implementation even if there were better ones available.

Daniel Stenberg from Curl wrote a great blog post about how bad and dumb that was if anyone is interested. https://daniel.haxx.se/blog/2026/01/17/more-http-3-focus-one...

rwmj 3 hours ago [-]
Compared to the move to OpenSSL 3, this transition has been very smooth. Only the dropping of "Engines" was a problem at all, and in Fedora most of those dependencies have been changed.
yjftsjthsd-h 4 hours ago [-]
As a complete non-expert:

On the one hand, looks like decent cleanup. (IIRC, engines in particular will not be missed).

On the other hand, breaking compatibility is always a tradeoff, and I still remember 3.x being... not universally loved.

moralestapia 4 hours ago [-]
That's why it is version 4.
pixel_popping 47 minutes ago [-]
Mythos is coming for yaaaaa (just kidding).
bensyverson 3 hours ago [-]
I just updated to 3.5x to get pq support. Anything that might tempt me to upgrade to 4.0?
altairprime 3 hours ago [-]
The top feature, “Support for Encrypted Client Hello (ECH, RFC 9849)”, is of prime importance to those operating Internet-accessible servers, or clients; hopefully your Postgres server is not one such!
bensyverson 30 minutes ago [-]
It's a web server (pq as in post-quantum, not pg as in Postgres), but that's a great feature!
jmclnx 3 hours ago [-]
I wonder how hard it is to move from 3.x to 4.0.0 ?

From what I remember hearing, the move from 2 to 3 was hard.

georgthegreat 3 hours ago [-]
That's because there was no version 2...
some_furry 3 hours ago [-]
Yes there was!

But, *thousand yard stare*, it was the version for the FIPS patches to 1.0.2.

ge96 4 hours ago [-]
Just in time for the suckerpinch video