Go HTTPS Servers with TLS (eli.thegreenplace.net)
tialaramex 1102 days ago [-]
> Since the certificate carries the bank's legitimate public key, when you use it to generate your shared secret only the bank will be able to decrypt it.

I appreciate it isn't central to the article's purpose, but this is wrong, and as background noise it contributes to a misunderstanding that causes other problems every time it occurs.

I will describe what's done in TLS 1.3 because that's clearer, but you should know this isn't how your TLS 1.2 server actually works in real use today either.

TLS encryption doesn't care about these certificates. (Elliptic curve) Diffie-Hellman is used to agree random keys, and the random keys encrypt everything. In TLS 1.3 nobody even says anything about certificates before that step happens - the certificates are sent encrypted.
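To make that concrete, here's a minimal Go sketch of the key-agreement step on its own (using crypto/ecdh; the X25519 choice and variable names are just for illustration). Both sides derive the same secret from ephemeral key pairs, with no certificate anywhere:

    package main

    import (
        "bytes"
        "crypto/ecdh"
        "crypto/rand"
        "fmt"
    )

    func main() {
        curve := ecdh.X25519()

        // Each side generates an ephemeral key pair; only the public halves cross the wire.
        clientKey, _ := curve.GenerateKey(rand.Reader)
        serverKey, _ := curve.GenerateKey(rand.Reader)

        // Each side combines its own private key with the peer's public key.
        clientSecret, _ := clientKey.ECDH(serverKey.PublicKey())
        serverSecret, _ := serverKey.ECDH(clientKey.PublicKey())

        // Both arrive at the same shared secret; no certificate was involved.
        fmt.Println("secrets match:", bytes.Equal(clientSecret, serverSecret))
    }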

So why have certificates at all? Because without them you're having a secure conversation... but you don't know who you're having it with. So, now that we're definitely communicating securely with someone, let's check it's who we expected. In TLS 1.3 the server (and optionally the client) can send a certificate claiming some identity, and then sign the transcript (a record of everything said so far) with the corresponding private key, in effect saying "This is proof you are having this conversation with me".

NovemberWhiskey 1102 days ago [-]
That seems just a little bit picky. The author of the article already points out that the purpose of the certificates is to avoid the man-in-the-middle attack.

It is also, factually, a description of how RSA key agreement works.

tialaramex 1102 days ago [-]
Firstly, if we're being picky it isn't factually a description of RSA key agreement. For RSA kex the shared secret is just picked by the client (hopefully at random) and then it's encrypted with that RSA public key for transmission in order to achieve the implied authentication. You don't use the public key to generate the shared secret as this quote suggests.

More importantly, nobody still does this. If you're writing new HTTPS Servers, in Go, in 2021, your only reason to even implement RSA kex is that it's technically still Mandatory To Implement for TLS 1.2 and so it's a compliance consideration. You definitely shouldn't have clients that still think RSA kex is a good idea.
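For a Go server specifically, a rough sketch of what avoiding RSA kex looks like: crypto/tls doesn't let you configure TLS 1.3 suites at all (they're all ephemeral Diffie-Hellman), and for TLS 1.2 you can restrict the suite list to ECDHE-only so RSA key exchange is never negotiated (cert.pem and key.pem are placeholder paths):

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        srv := &http.Server{
            Addr: ":8443",
            TLSConfig: &tls.Config{
                MinVersion: tls.VersionTLS12,
                // TLS 1.3 suites are not configurable in crypto/tls and are all ECDHE.
                // For TLS 1.2, list only ECDHE suites so RSA key exchange is never offered.
                CipherSuites: []uint16{
                    tls.TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,
                    tls.TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,
                    tls.TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,
                    tls.TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,
                },
            },
            Handler: http.NotFoundHandler(),
        }
        // cert.pem/key.pem stand in for an existing certificate and key.
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }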

The reason I care though isn't about being technically correct, it's because people get the idea that this sort of 1990s hybrid crypto is really how things work, and then they're confused because of course that's not how anything they actually use works. At best this means TLS gets a reputation for being more mysterious or complicated than it really is, and at worst we get cargo cult solutions that make "sense" based on the misunderstanding but alas aren't actually secure in practice.

0xEFF 1102 days ago [-]
Fair enough. Today I learned that getting rid of RSA key exchange and using opinionated DH parameters is a significant part of what defines TLS 1.3.
eliben 1101 days ago [-]
Thank you for the comment! I will revise that part of the post to avoid confusion.
bogomipz 1102 days ago [-]
But even with Diffie-Hellman the DH parameters will be signed with the server's RSA key. This is how the client can be sure who it is negotiating with. As such the bank analogy for public key crypto is still worthwhile.

Also your comment seems like an odd nitpick given that the author states in the first paragraph:

>"I won't be covering how the protocol itself works in detail here ..."

And in the sentence before that they also linked to previous articles they've published on both DH and RSA which go heavily into the details of how each works. It seems to me that they were actually being quite careful to avoid the very thing you are criticizing, even though it obviously wasn't the point of the article.

cdogl 1103 days ago [-]
Nice reference piece.

I personally have rarely found the need to handle TLS in my application code; even in a cloud-native environment where you want to secure traffic inside your network, I prefer to do TLS termination in another process (in Kubernetes land this would be a sidecar container in a pod), because {nginx,envoy,caddy,etc} have great TLS support and there are strong standard conventions for how to configure them with the key. It's often much easier for someone to come along and figure out how you're configuring a standard HTTP reverse proxy than to figure out some bespoke application configuration when a TLS key needs to be rotated.

This is probably a little naive and I suspect that some highly latency-sensitive applications can't afford the overhead of that additional hop (even if it's over localhost), but I think offloading the TLS workload as its own concern has many benefits.

idoubtit 1103 days ago [-]
In order to mitigate the latency of using nginx as a reverse-proxy, I recently added a feature to a Haskell application: instead of listening on a TCP port, it can listen on a unix socket. It's not only faster, it also avoids the congestion problems of TCP, and adds security (unix user and group permissions).
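The Go equivalent is only a few lines, in case anyone wants to try the same thing behind nginx (a rough sketch; the socket path is made up):

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/http"
        "os"
    )

    func main() {
        const socketPath = "/run/myapp/http.sock" // hypothetical path the proxy forwards to

        _ = os.Remove(socketPath) // clear a stale socket left by a previous run
        ln, err := net.Listen("unix", socketPath)
        if err != nil {
            log.Fatal(err)
        }
        // Unix permissions are the access control: owner and group only.
        if err := os.Chmod(socketPath, 0o660); err != nil {
            log.Fatal(err)
        }

        http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello from behind the proxy")
        })
        log.Fatal(http.Serve(ln, nil))
    }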
tialaramex 1102 days ago [-]
This isn't very Zero Trust: it creates a delicious soft centre where there's not even encryption if an adversary can get there.

Likely in a small toy system this is all just binaries running inside a VM like the K8s setup you describe - and you could reason that any adversary who is running inside your VM has won already, but on bigger systems this design increasingly makes it likely production will end up with the unencrypted bits moving over at least some sort of internal network.

I've built things that worked that way, but I've also built things that deliberately did their own TLS, and I know for sure which I'd have confidence in to protect valuable data.

NovemberWhiskey 1102 days ago [-]
There are a lot of factors.

One factor is your compute environment; if there's a really solid way to assure that the connection between your proxy and your service implementation remains secure, then that's one thing. However, outside of K8s (or similar) that may not be easy or may be bespoke, and there's a configuration overhead of assuring that proxies run in the right place, bind the right ports, expose them only to the right places, and so on.

Even assuming a good orchestrator, there is that extra degree of assurance in knowing that TLS is actually being terminated in your process.

Public CA PKI vs. enterprise PKI also makes a difference; it's easy enough to integrate with LetsEncrypt or whatever using Nginx but your enterprise PKI might be more complex and having SDKs for direct integration into service code might actually be easier. In a large environment, or one where certificates are short-lived, the operational aspect dominates.
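As a sketch of what that direct integration can look like in Go (file paths are hypothetical), a GetCertificate callback lets whatever renews the short-lived certificate rotate it without restarting the service:

    package main

    import (
        "crypto/tls"
        "log"
        "net/http"
    )

    func main() {
        cfg := &tls.Config{
            MinVersion: tls.VersionTLS12,
            // Re-read the certificate on each handshake so whatever renews it
            // (enterprise PKI agent, cron job, ...) never has to restart us.
            // A real implementation would cache and only reload on change.
            GetCertificate: func(*tls.ClientHelloInfo) (*tls.Certificate, error) {
                cert, err := tls.LoadX509KeyPair("/etc/myapp/tls/cert.pem", "/etc/myapp/tls/key.pem")
                if err != nil {
                    return nil, err
                }
                return &cert, nil
            },
        }
        srv := &http.Server{Addr: ":8443", TLSConfig: cfg, Handler: http.NotFoundHandler()}
        // Empty file arguments: the certificate comes from GetCertificate above.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }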

Level of cloud buy-in is also a factor; for example, if you're deep in AWS then you're probably already hosting behind an AWS load balancer of some sort that knows how to terminate TLS and also knows how to integrate with ACM for automated renewals.

Finally, if you need client authentication certificates then you might find your proxy implementation just passes them on to you as headers (or perhaps just does chain validation), which leaves you holding the bag for the rest of that in your service anyway.
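For the client-certificate case, terminating mTLS in the service itself is also only a few lines in Go (a sketch; the CA bundle path is hypothetical), and it means the verified identity comes from the handshake rather than a header you have to trust:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "net/http"
        "os"
    )

    func main() {
        caPEM, err := os.ReadFile("/etc/myapp/tls/client-ca.pem") // hypothetical CA bundle
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        cfg := &tls.Config{
            ClientAuth: tls.RequireAndVerifyClientCert, // chain validation happens in-process
            ClientCAs:  pool,
        }
        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // The verified client identity is available directly, not via a header.
            name := r.TLS.PeerCertificates[0].Subject.CommonName
            w.Write([]byte("hello, " + name + "\n"))
        })
        srv := &http.Server{Addr: ":8443", TLSConfig: cfg, Handler: handler}
        log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
    }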

debarshri 1103 days ago [-]
I have seen a lot of IoT devices with embedded certificates handled via application code. In these situations certificate rotation is a full firmware update. There are also places where resource constraints make running a sidecar impractical. Sidecars and proxies make a lot of sense in a cloud setting.
cdogl 1103 days ago [-]
Fair point. I see everything through the lens of my cloud / SaaS niche but Go is now used far more broadly than that.
kitd 1103 days ago [-]
You're correct from the PoV of the server, but it is still very useful to have the capability built into the standard library for those clients that need to use TLS.
1vuio0pswjnm7 1103 days ago [-]
I adopted this thinking some time ago. haproxy, stunnel and, to a lesser extent, sslsplit are ones I would add to the list. Letting these programs deal with TLS allows me to use so much more networking software that either will never be TLS-capable or which requires too much attention to constantly monitor whether the developers are using the libraries properly and keeping up with the latest versions.
1vuio0pswjnm7 1102 days ago [-]
As an end user, I MITM TLS so I can see and, if necessary, modify what is being sent from applications. TLS-enabled servers are bound to the loopback. I must have control over what is being sent. If an application denies the user control over what is sent then I will exercise control outside the application. Applications only communicate with the loopback, not the network.
spacemanmatt 1103 days ago [-]
I came here to muse on basically the same question. Nginx and other TLS termination options seem way more robust than any library I'd write an app server in. After all, this is intended for security, so the dev-op in me dictates: use a product that is as close to bulletproof as possible.
bombcar 1102 days ago [-]
It also allows security and other patches even after the application has been abandoned or otherwise ignored (much more important for non-open source applications).
Cthulhu_ 1103 days ago [-]
Same; on the one hand my Go app will be running as its own webserver, but on the other hand it'll be years before it's ready to run standalone (it's replacing a legacy app); until then it'll be running behind an Apache webserver which also takes care of HTTPS.
londons_explore 1102 days ago [-]
Why can't someone make TLS self-configuring??

I want to just run a server somewhere (my laptop, Kubernetes, some virtual machine) and have any domain that points to that machine automatically get a TLS certificate.

I challenge anyone to propose a way to register and set up "mydomain.com" (https serving some static files) with opensource software without having to type "mydomain" at least 3 times!

tialaramex 1102 days ago [-]
As illustrated by several of the replies, people can do this, particularly using the ACME standard. In fact it's slightly surprising to me that many more web server applications don't do this yet. I had sort of expected that Microsoft, for example, would ship an IIS that just magically works, maybe supplied out of the box with a coupon for one certificate per Windows Server license (or something) from a for-profit Certificate Authority, and then you either change a registry value to use Let's Encrypt or you pay them $10 per cert for the out-of-box behaviour. Or that services intended to run as a web server, like Jenkins or Bugzilla, would do all this for you.

But you won't be able to solve the general problem without some other element. The current iteration of the Ten Blessed Methods makes it very easy to have an HTTP or HTTPS server that arranges for its own certificates with ACME, because they assume that if you get to run web servers (on port 80 and/or 443) on some.name.example that's because you control some.name.example or you've got permission from whoever does control it to do whatever you want.

It's trickier for a mail server. You may genuinely run the MX for some.name.example but if you don't control DNS for that name, nor run the web servers, you can't pass ACME challenges. In the case of a mail server you could automate a hypothetical ACME challenge that relies on say 3.2.2.4.4 Constructed Email to Domain Contact but I don't think anybody actually offers that today.

It's completely impossible (today, on your own) for some other random TLS service. If you run the TLS-enabled IRC server on some.name.example you can't get certificates without help from the person who really controls the some.name.example FQDN. They need to either let you run an HTTP server, control their DNS records or just hand out working certificates to you somehow.

jsjsbdkj 1102 days ago [-]
ACME HTTP challenges (Let's Encrypt, for example) literally do this. Your web server gets a challenge from the issuer, serves a special page with the response to validate that you own the domain, and the issuer gives you a cert for that domain.
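In Go this is a handful of lines with golang.org/x/crypto/acme/autocert (a sketch; the domain and cache directory are placeholders):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "golang.org/x/crypto/acme/autocert"
    )

    func main() {
        m := &autocert.Manager{
            Prompt:     autocert.AcceptTOS,
            HostPolicy: autocert.HostWhitelist("example.com"), // placeholder domain
            Cache:      autocert.DirCache("certs"),            // where issued certs are stored
        }
        // Port 80 answers the ACME HTTP-01 challenge and redirects everything else to HTTPS.
        go http.ListenAndServe(":80", m.HTTPHandler(nil))

        srv := &http.Server{
            Addr:      ":443",
            TLSConfig: m.TLSConfig(), // obtains and renews certificates on demand
            Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
                fmt.Fprintln(w, "hello over automatic HTTPS")
            }),
        }
        // Empty file arguments: the certificate comes from the autocert manager.
        log.Fatal(srv.ListenAndServeTLS("", ""))
    }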
londons_explore 1102 days ago [-]
But it turns out no server I have seen is as simple as "apt install server".

You always have to paste 100 lines of boilerplate into some hidden config file.

francislavoie 1102 days ago [-]
Like I replied in a sibling thread, Caddy can do exactly this.

A few lines to install the debian repo (with systemd service): https://caddyserver.com/docs/install#debian-ubuntu-raspbian

Then your /etc/caddy/Caddyfile just looks like this (assuming you have an app listening on port 8080 that you want to proxy to):

    example.com
    reverse_proxy localhost:8080
Yer done. Automatic HTTPS for "example.com", all you need to do is point your DNS to your server, make sure ports 80 and 443 are accessible, and Caddy does the rest.
francislavoie 1102 days ago [-]
That's exactly what Caddy does! https://caddyserver.com
benlivengood 1102 days ago [-]
Cert-manager works with at least Ambassador to do this, but basically any web server that can read a TLS certificate from a Kubernetes secret should work with cert-manager.

Beyond that, though, there really isn't a solid "turn on TLS" option because security is hard and TLS is not a panacea. It ostensibly ensures that a client on the Internet will only send data to a domain name that currently has a matching TLS certificate (practically relying on an intricate web of trust between DNS and PKI), but even that isn't a guarantee. Web servers often still listen on port 80, clients can be forced to ignore certificate errors, and TLS does zero client-side authentication or authorization by default.

GauntletWizard 1102 days ago [-]
I am working towards this goal, sort of: I believe that TLS should be self-configuring. I've built a spec and some tools around automatically issuing TLS certificates in a well-defined place in Kubernetes clusters. You then call my init code to start up your services: exactly like you'd call http.ListenAndServe(), you call KubeTLS.ListenAndServe(). The code is simple enough, and there's a little script you run on your development machine to create a self-signed cert locally.

https://gitlab.com/gauntletwizard_net/kubetls

fjni 1102 days ago [-]
makeworld 1102 days ago [-]
Here's one that's actually ready to use: https://github.com/caddyserver/certmagic
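For reference, the one-line setup that library advertises looks roughly like this (the domain is a placeholder):

    package main

    import (
        "fmt"
        "log"
        "net/http"

        "github.com/caddyserver/certmagic"
    )

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprintln(w, "hello over automatic HTTPS")
        })
        // Obtains and renews certificates, redirects HTTP to HTTPS, and serves the mux.
        log.Fatal(certmagic.HTTPS([]string{"example.com"}, mux))
    }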