Just want simple TLS for your .internal network? (github.com)
8organicbits 9 hours ago [-]
A word of warning: client-side support of name constraints may still be incomplete. I know it works on modern Firefox and Chrome, but there's lots of other software that uses HTTPS.

This repo links to BetterTLS, which previously audited name constraint support, but BetterTLS only checked name constraint support at the intermediate certificates, not at the trust anchors. I reported[1] the oversight a year back, but Netflix hasn't re-engineered the tests.

Knowing how widely adopted name constraints are on the client side would be really useful, but I haven't seen a sound caniuse style analysis.

Personally, I think the public CA route is better and I built a site that explores this[2].

[1] https://github.com/Netflix/bettertls/issues/19

[2] https://www.getlocalcert.net/

ndsipa_pomu 13 hours ago [-]
I prefer to assign an external name to an internal device and grab a free SSL cert from Let's Encrypt, using the DNS challenge instead, since internal IP addresses aren't reachable by their servers.
lolinder 9 hours ago [-]
Yep. I tried the custom-root-CA approach for a long time, but there were just too many problems with it:

* Loading it into every device was more work than it sounds. We have Android, iOS, Mac, Windows, and Linux, all of which have their own rules.

* Even once loaded, some applications come with their own set of root CAs. Some of those have a custom way of adding a new one (Firefox), others you just had to accept the invalid cert each time, and still others just refused to work.

* I deploy my self-hosted stuff with Docker, which means that not only does each device need to have the root CA added to it but every Docker image that talks to the internal network needs to have it as well. This ends up being a mix of the previous two problems, as I now have to figure out how to mount the CA on an eclectic bunch of distros and I often then have to figure out why the dockerized application isn't using the CA.

In the end I settled on a DNS-challenge wildcard SSL cert loaded into Caddy, with Caddy terminating TLS for everything that's on my home server. It's way simpler to configure the single server (or even 2-3 servers) than every single client.
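
For anyone wanting to replicate it, the shape is roughly the sketch below. Names, addresses, and the token variable are made up, and it assumes a Caddy build that includes the Cloudflare DNS plugin (other DNS providers work similarly):

    # hedged sketch, not my literal config
    cat > /etc/caddy/Caddyfile <<'EOF'
    *.home.example.com {
        tls {
            dns cloudflare {env.CF_API_TOKEN}   # DNS-01 challenge; nothing needs to be reachable from outside
        }
        @paperless host paperless.home.example.com
        handle @paperless {
            reverse_proxy 127.0.0.1:8000
        }
        @octoprint host octoprint.home.example.com
        handle @octoprint {
            reverse_proxy 10.0.0.5:5000
        }
    }
    EOF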

DidYaWipe 5 minutes ago [-]
I've used this method for development successfully (generating CAs and certs on Mac with mkcert), but Apple has broken certificates in iOS 18: root CAs are not showing up in the trust UI on iPhones after you install them. It's a big issue for developers, and has broken some people's e-mail setups as well, along with some internal software deployments.

Apple is aware of it, but it's still not fixed in iOS 18.1.

gh02t 5 hours ago [-]
> I deploy my self-hosted stuff with Docker, which means that not only does each device need to have the root CA added to it but every Docker image that talks to the internal network needs to have it as well. This ends up being a mix of the previous two problems, as I now have to figure out how to mount the CA on an eclectic bunch of distros and I often then have to figure out why the dockerized application isn't using the CA.

FWIW, I solve this problem with wildcards + a central reverse proxy for containerized apps. I host most services on a subdomain of the machine that hosts containers, like "xxx.container.internal", "xxx2.container.internal", etc. Instead of each container doing its own SSL I have one central reverse proxy container that binds to 443, and each app container gets put on an internal Docker network with the reverse proxy. The reverse proxy has a wildcard certificate for the host system domain name "*.container.internal" and you can just add an endpoint for each service's SNI. I'm using Zoraxy, which makes it very easy to add a new endpoint in a couple of clicks if I install a new app, but this works with lots of other reverse proxies like Caddy, Nginx, etc. If containers need to talk to each other over the external endpoint for some reason and thus need the root CA, you can mount the host system's certificate store into the container, which seems to work pretty well the one or two times I needed to do it.
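
The cert store mount is nothing fancy; roughly something like this, with Debian-ish paths and a made-up image/network name (adjust for Alpine/RHEL-based images, whose CA bundle lives elsewhere):

    # hedged sketch: let the container trust whatever CAs the host trusts
    docker run -d --name someapp \
      --network proxy-net \
      -v /etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro \
      someapp:latest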

I haven't really solved the annoyance of deploying my root CA to all the devices that need it, which truly is a clusterfuck, but I only have to do it once a year so it isn't that bad. Very open to suggestions if people have good ways to automate this, especially in a general way that can cover Windows/Mac/iOS/Android/various Linuxes uniformly since I have a lot of devices. I've experimented with Ansible, but that doesn't cover mobile devices, which are the ones that make it most difficult.

poincaredisk 7 hours ago [-]
Historically, before wildcard certificates became freely available, this leaked all internal domains to the internet, but now it's mostly a solved problem.
peanut-walrus 18 minutes ago [-]
So what? Do you keep secrets in your domain names?
vegardx 6 hours ago [-]
I don't understand why that is such a huge problem. The alternatives have much more severe problems, ranging from reusing a wildcard in many places to running your own PKI.
NovemberWhiskey 6 hours ago [-]
It depends on your risk profile, but there are definitely people who'd rather run their own PKI than permit threat actor reconnaissance by publishing internal hostnames to CT logs.
vegardx 2 hours ago [-]
When this information is useful you've either got fundamental security-related issues that need to be addressed long before this, or you're dealing with threat actors with significant capabilities. In the latter case you've probably already taken this into account when building your stuff, or you have the capability and technical understanding to know how to properly roll out your own PKI.

The overlap of people who suggest that you either run your own PKI or just distribute a wildcard certificate, and who have the technical understanding of how to do this in a secure way, is minuscule. The rest of those people are probably better off using something like Let's Encrypt.

tbhb 6 hours ago [-]
These are exactly the challenges and toil I ran into over time with my self-hosted/homelab setup. I use regular domains now as well with DNS challenges for Let's Encrypt. I've been experimenting lately with CloudFlare Tunnel + Zero Trust Access as well for exposing only the endpoints I need from an application for local development like webhooks, with the rest of the site locked behind Access.
0x457 4 hours ago [-]
I used to run a wildcard cert with a DNS challenge from LE, with a Cloudflare Tunnel to expose an internal server to the interwebs.

I have since then switched to Ubiquiti products, and now I just run a WireGuard server for my road-warrior devices. Would use Cloudflare Tunnel if I ever need to expose anything publicly.

from-nibly 9 hours ago [-]
Don't forget chromecast, roku, fire stick, smart TV, and all the other bs.
DandyDev 13 hours ago [-]
I do this as well, but be aware that these external names you're using for internal devices become a matter of public record this way. If that's okay for you (it is for me), then this is a good solution. The advantage is also that you run no risk of name clashes because you actually own the domain
magicalhippo 13 hours ago [-]
I decided to try split DNS to avoid leaking the internal IPs, but it turned out a bit more fragile than I imagined.

Android especially is finicky, ignoring your DNS server if it doesn't like your setup. For example, if it gets an IPv6 address, it requires the DNS server to also have an IPv6 address, or it'll use Google's DNS servers.

It works now but I'm not convinced it's worth it for me.

capitol_ 9 hours ago [-]
Split DNS causes lots of headaches; it also makes root-cause analysis of failures really hard when they involve DNS.
Hamuko 12 hours ago [-]
I use CNAME records and it works on everything except Windows, where it works sometimes.

Basically, CNAME record from service.myserver.com to myserver.internal on a public DNS server, A record from myserver.internal to 1.2.3.4 on private DNS server.
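
Concretely, that's just these two records; the private half is served by whatever your LAN resolver is (IP and names are placeholders, and dnsmasq is only an example):

    # public DNS (at your provider):
    #   service.myserver.com.  CNAME  myserver.internal.
    # private DNS on the LAN, e.g. a dnsmasq/Pi-hole style entry:
    echo 'address=/myserver.internal/192.168.1.10' | sudo tee /etc/dnsmasq.d/internal.conf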

I think I could maybe get it working on Windows too by tweaking the TTLs. Currently both DNS servers are automatically setting the TTL and I think Windows freaks out about that.

ebb_earl_co 8 hours ago [-]
This seems like a good technique. What DNS software do you use?
Hamuko 8 hours ago [-]
I just use the one built into my UniFi router. Public DNS side is Cloudflare, which allows easy DNS validation for Let's Encrypt.
xfer 13 hours ago [-]
Or use a wildcard cert for all internal certs.
pridkett 11 hours ago [-]
This is exactly what I do. After seeing how much of my internal network was exposed in certificate transparency logs, I noped out and just do a DNS challenge for a wildcard for almost everything.

Now I have a nice script that distributes my key automatically to 20 or so hosts and apps, and have a real SSL cert on everything from my UDM Pro to my Synology to random Raspberry Pis running containers. Most of these have domain names that only resolve on my local network.

This is made possible by a fairly robust DNS setup that consists of not only giving A records to all my hosts automatically, but also adding in CNAMEs for services and blocking almost all outbound DNS, DNS over TLS, DoH, etc.
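
For anyone wanting to do something similar, the distribution part doesn't need to be much fancier than a loop like this hedged sketch (hosts, paths, and reload commands are placeholders, not my actual script):

    for host in udm-pro synology pi-octoprint; do
      scp fullchain.pem privkey.pem "root@$host:/etc/ssl/private/" && \
        ssh "root@$host" 'systemctl reload nginx 2>/dev/null || true'
    done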

dopp0 7 hours ago [-]
> fairly robust DNS setup that consists of not only giving A records to all my hosts

looks nice, can you give more details on this? tks!

ndsipa_pomu 12 hours ago [-]
That could be a good idea, though it means that the certificate/key has to be well guarded.
project2501a 12 hours ago [-]
Please don't. Technical debt accumulates by force of practice.
qwertox 12 hours ago [-]
It's working well for me. My technical debt is to always make sure that I'm able to renew a certificate and that the distribution occurs successfully.

I don't see how other solutions are less problematic.

ndsipa_pomu 12 hours ago [-]
> be aware that these external names you're using for internal devices become a matter of public record this way

Yes, I sometimes think about that, but have come to the conclusion that it's not likely to make any difference. If someone is trying to infiltrate my home network, then it's not going to really help them to know internal IP addresses as by the time they get to use them, they're already in.

dspillett 11 hours ago [-]
> If someone is trying to infiltrate my home network

I don't think the publishing of host names was mentioned as a concern for small home networks, but more for larger organisations that might be subject to a coordinated break-in or simply have trade secrets¹² that might be hinted at by careless naming of resources.

----

[1] Their next big product/enhancement, as yet unannounced even within the company, for instance.

[2] Hmm, checking what is recorded against one of DayJob's domains I see clues as to who some of our clients are. Not really a significant issue for security at all, but I know at least some of our contracts say we shouldn't openly talk about that we provide services to that client³ so I'll drop a message to the ISC to suggest we discuss if we need to care about the matter…

[3] Though that is mostly in the form of not using their logos in our advertising and such.

xena 10 hours ago [-]
That's why you have an internal domain that's not related to the company. Something like "packet-flinging.ninja". Everything's a tradeoff though.
deltaburnt 8 hours ago [-]
It seems really easy to associate a company with its internal domain though? Unless the company treats it as a secret only known between machines.
dspillett 3 hours ago [-]
> It seems really easy to associate a company with its internal domain though?

Especially as people will talk about it as a “you'll never guess what…” when talking about places they work or have worked.

qwertox 12 hours ago [-]
You don't need to publish the IP addresses publicly if you use an internal DNS server. I think even Pi-hole could do this.
prmoustache 12 hours ago [-]
you can use a wildcard of type *.internal.example.com or use names that do not relate to the service name if you want to obfuscate the tech stack used.

The only thing public is that you may have an internal network with nodes.

djhworld 12 hours ago [-]
I last looked at LetsEncrypt maybe 8-9 years ago. I thought it was awesome but not suitable for my internal stuff due to the HTTP challenge requirement, so I went down the self-signed CA route, stuck with that, and didn't really keep up with developments in the space.

It was only recently that someone told me about the DNS challenge, and I immediately ported everything over with a wildcard cert - it's been great!

bmicraft 7 hours ago [-]
They introduced the DNS challenge almost 9 years ago, so you must have just barely missed it!
giobox 5 hours ago [-]
LetsEncrypt + DNS challenge + DNS provider with a Let's Encrypt-compatible API for modifying records works fantastically well for getting "real" HTTPS/SSL working for private IP addresses; the automatic renewals make it largely set-and-forget with very little config or setup required.

I've had working validly signed SSL on literally all my private home self-hosted services and load-balancers internally for years this way.

It also easily switches to a production-like setup if you later decide to host something on the public internet.

bombcar 9 hours ago [-]
Just get a wildcard cert from Let's Encrypt and copy it internally. Then you don't even need to leak names to LE via DNS.
thatcherc 3 hours ago [-]
This sounds like something I'd want to do! Is the idea that you'd have a public domain name like "internal.thatcherc.com" resolve to an internal IP address like 10.0.10.5? I've wondered about setting this up for some local services I have but I wasn't sure if it was a commonly-done thing.
AdamJacobMuller 3 hours ago [-]
I've been doing this for a year or two with k3s + cert-manager.

Works great.

In my case everything points to a tailscale operator endpoint, which goes to nginx ingress, which routes to the appropriate pods.

It's very much a set-and-forget solution.

wkat4242 2 hours ago [-]
Yeah that's what I do. If you use anything other than Cloudflare it's really, really hard to get the authentication plugins going on every different web server though. Every server supports a different subset of providers and usually you have to install the plugins separately. It's a bit of a nightmare. But once it's dialled in it's ok.

I didn't like this approach because I don't like to leak information about my internal setup but I found that you don't even have to register your servers on a public DNS so it's ok. Just the domain has to exist. It does create very temporary TXT records though.

candiddevmike 11 hours ago [-]
Obligatory: if DNS validation is good enough, DANE should've been too. Yes, MITM things could potentially ensue on untrusted networks without DNSSEC, but that's perfect-being-the-enemy-of-good territory IMO.

This would allow folks to have .internal with auto-discovered, decentralized, trusted PKI. It would also enable something like a DNSSEC on/off toggle switch for IoT devices to allow owners to MITM them and provide local functionality for their cloud services.

tptacek 4 hours ago [-]
DANE rollout was attempted. It didn't work reliably (middleboxes freak out about DNSSEC), slowed things down when it did, and didn't accomplish any security goals (even on its own terms) because it can't plausibly be deployed DANE-only on the modern Internet. Even when the DANE working group came up with a no-additional-RTTs model for it (stapling), it fell apart for security reasons (stripping). DANE is a dead letter.

It happens. I liked HPKP, which was also tried, and also failed.

8organicbits 7 hours ago [-]
This would be cool, but I think we're still a far way off from that being an option. DANE requires DNSSEC validation by the recursive resolver and a secure connection from the user's device to that resolver. DoH appears to be the leading approach for securing the connection between the user's device and the resolver, and modern browser support is pretty good, but the defaults in use today are not secure:

> It disables DoH when [...] a network tells Firefox not to use secure DNS. [1]

If we enabled DANE right now, then a malicious network could tell the browser to turn off DoH and to use a malicious DNS resolver. The malicious resolver could set the AD flag, so it would look like DNSSEC had been validated. They'd then be able to intercept traffic for all domains with DANE-validated TLS certificates. In contrast, it's difficult for an attacker to fraudulently obtain a TLS certificate from a public CA.

Even if we limit DANE to .internal domains, imagine connecting to a malicious network and loading webmail.internal. A malicious network would have no problem generating a DANE-validated TLS certificate to impersonate that domain.

[1] https://support.mozilla.org/en-US/kb/dns-over-https#w_defaul...

candiddevmike 6 hours ago [-]
I'll concede that DNSSEC is not in a good spot these days, but I don't know if that's really due to its design or lack of adoption (it's in similar territory as IPv6 TBH). DoH is (IMO) a poor workaround instead of "fixing" DNSSEC, but it's unfortunately the best way to get secure resolution today.

Putting aside the DNSSEC issues, IMO, DNS should be authoritative for everything. It's perfectly decentralized, and by purchasing a domain you prove ownership of it and shouldn't then need to work within more centralized services like Let's Encrypt/ACME to get a certificate (which seems to be becoming more and more required for a web presence). A domain name and a routable IP should be all you need to host services/prove to users that domain.com is yours, and it's something I think we've lost sight of.

Yes, DANE can create security issues, your webmail example is a perfectly valid one. In those situations, you either accept the risk or use a different domain. Not allowing the behavior because of footguns never ends well with technology, and if you're smart enough to use .internal you should understand the risks of doing so.

Basically, we should let adults be adults on the internet and stop creating more centralization in the name of security, IMO.

tptacek 4 hours ago [-]
It is not in similar territory as IPv6. We live in a mixed IPv4/IPv6 world (with translation). IPv6 usage is steadily and markedly increasing. Without asking to be, I'm IPv6 (on ATT Fiber) right now. DNSSEC usage has actually declined in North America in the preceding 18 months, and its adoption is microscopic to begin with.

IPv6 is going to happen (albeit over a longer time horizon than its advocates hoped). DNSSEC has failed.

teddyh 2 hours ago [-]
> DNSSEC has failed.

This is the customary comment by me that this is far from the prevailing view. From my viewpoint, DNSSEC is steadily increasing, both in demand and in amount of domains signed.

tptacek 44 minutes ago [-]
Here's .COM and .NET:

https://www.verisign.com/en_US/company-information/verisign-...

Signed domains are increasing where they're done automatically by registrars; where the market has a say, use is declining --- sharply!

8organicbits 3 hours ago [-]
DANE without DNSSEC isn't a good idea. DoH secures the connection between the user's device and their recursive resolver, but it cannot secure the connection between the recursive resolver and the authoritative name servers. If you're using DANE you need a stronger guarantee that the records are valid.
ndsipa_pomu 10 hours ago [-]
I hadn't heard of DANE, so looked it up and found the wikipedia entry: https://en.wikipedia.org/wiki/DNS-based_Authentication_of_Na...

According to that, it's not supported by Chrome, nor Firefox.

tptacek 4 hours ago [-]
It was, once, and then got pulled.
teddyh 2 hours ago [-]
It’s customarily used for e-mail transport.
Tepix 10 hours ago [-]
How do you automate it?
ndsipa_pomu 8 hours ago [-]
I use Dynu.com as my DNS provider (they're cheap, provide APIs and very fast to update which is great for home IP addresses that may change). Then, to get the certificates, I use https://github.com/acmesh-official/acme.sh which is a shell script that supports multiple certificate and DNS providers. Copying the certificates to the relevant machines is done by a custom BASH script that runs the relevant acme.sh commands.

One advantage of DNS challenge is that it can be run anywhere (i.e. doesn't need to run on the webserver) - it just needs the relevant credentials to add a DNS TXT record. I've got my automation wrapped up into a Docker container.

globular-toast 9 hours ago [-]
Not OP but I have a couple of implementations: one using caddyserver[0] as a reverse proxy in a docker-compose set up, and the other is a Kubernetes cluster using cert-manager[1].

[0] https://caddyserver.com/ [1] https://cert-manager.io/

kassner 5 days ago [-]
From the linked Netflix article:

> The Trouble with Name Constraints

> The Name Constraints extension lives on the certificate of a CA but can’t actually constrain what a bad actor does with that CA’s private key

> Therefore, it is up to the TLS _client_ to verify that all constraints are satisfied

> However, as we extended our test suite beyond basic tests we rapidly began to lose confidence. We created a battery of test certificates which moved the subject name between the certificate’s subject common name and Subject Alternate Name extension, which mixed the use of Name Constraint whitelisting and blacklisting, and which used both DNS names and IP names in the constraint. The result was that every browser (except for Firefox, which showed a 100% pass rate) and every HTTPS client (such as Java, Node.JS, and Python) allowed some sort of Name Constraint bypass.

That’s the danger of any solution that requires trusting a self-signed CA. Better just trust the leaf certificate, maybe make it wildcard, so you only have to go through the trust-invalid-cert once?

nh2 4 days ago [-]
Note that statement is 7 years old; it prompted Netflix to make bettertls.com.

The situation has improved since then, see the linked https://news.ycombinator.com/item?id=37544094

chgs 13 hours ago [-]
I want to be able to import a cert into my browser and specify what to trust it for myself. "Only trust this cert for domain.com", for example.

The name constraints can give me a hint what it’s designed for, but if I import a cert to MITM devsite.org, I don’t want that cert working for mybank.com.

nh2 5 days ago [-]
I did some research, write-up and scripting about the state of X.509 Name Constraints, so that people you give your CA cert to don't need to trust you not to MitM them on other domains.

Packaged into a convenient one-liner to create a wildcard cert under the new .internal TLD.

Please scrutinize!

I use this to provide e.g. at home:

    https://octoprint.myhome.internal
    https://paperless.myhome.internal
to provide transport encryption of these services in the local WiFi.

Friends and family can add the CA root to their devices without having to worry about me MitM'ing their other connections.
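
For the curious, the shape of what the script does is roughly the following. This is a simplified sketch, not the exact commands from the repo, and it assumes a reasonably recent OpenSSL (-addext needs 1.1.1+, -copy_extensions needs 3.0+):

    # 1. a name-constrained root that can only ever vouch for *.internal names
    openssl req -x509 -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
      -keyout ca.key -out ca.crt -days 3650 -subj "/CN=myhome internal CA" \
      -addext "basicConstraints=critical,CA:TRUE" \
      -addext "nameConstraints=critical,permitted;DNS:.internal"
    # 2. a wildcard leaf for the home services, signed by that root
    openssl req -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -nodes \
      -keyout wild.key -out wild.csr -subj "/CN=*.myhome.internal" \
      -addext "subjectAltName=DNS:*.myhome.internal"
    openssl x509 -req -in wild.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
      -copy_extensions copy -days 825 -out wild.crt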

vbezhenar 13 hours ago [-]
Is it possible to constrain an existing CA?

For example, my government uses a non-standard CA and some websites rely on it. But importing that CA obviously makes them able to issue google.com certificates and MITM me if they want to. And they already tried, so trust is broken.

I imagine something like generating a separate name-constrained certificate, signing the existing CA with this name-constrained certificate (I think it's called cross-signing or something like that), and importing things into the OS, expecting that the browser will use the name constraints of the "Root-Root" certificate. Could it work?

poincaredisk 6 hours ago [-]
Yes, I do this at my work to restrict my company CA to company servers [1]. You generate your own CA and cross-sign the other cert with any constraint you want. It works great, but requires some setup, and of course now you have your own personal CA to worry about.

[1] Yes, company is ok with it, most of my team does it, and this makes everyone more secure. Win-win.
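
The "some setup" is essentially a cross-sign with plain openssl, roughly along these lines (file names and the constrained domain are made up; this is a hedged sketch, and in practice there are extra details around key identifiers and serving the cross-signed cert in the chain):

    # re-sign the company CA's existing certificate with your own root, adding a name constraint
    cat > nc.cnf <<'EOF'
    basicConstraints = critical, CA:TRUE
    keyUsage = critical, keyCertSign, cRLSign
    nameConstraints = critical, permitted;DNS:.corp.example.com
    EOF
    openssl x509 -in company-ca.crt -CA my-root.crt -CAkey my-root.key \
      -set_serial 1 -days 365 -extfile nc.cnf -out company-ca.cross-signed.crt
    # then trust my-root.crt locally instead of the original company CA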

dfox 4 hours ago [-]
I assume that the mentioned “some setup” involves not only distributing the new root CA, but also somehow prepopulating the old cross-signed certificate, as the services know nothing about it and thus will not send it in their cert chain. Or am I overlooking something?
dinosaurdynasty 7 hours ago [-]
Namecoin has made utilities similar to this (in order to constrain all existing CAs from signing `.bit` domains) so I assume so.
ulfbert_inc 13 hours ago [-]
Niklas, if you are reading this - it was a pleasure to interview with you some 6 (or so) years ago :) thanks for the script and the research, I will make use of it.
nh2 42 minutes ago [-]
Thanks Vadzim, I remember -- happy to be useful :)
bluGill 9 hours ago [-]
Looks good, but I want to MitM my network. I want youtube.com to redirect to my internal server that only has a few approved videos. My kids do some nice piano lessons from youtube, but every time I let them they wait until I'm out of the room and then switch to something else. There are lots of other great educational videos on youtube, but also plenty to waste their time on. (I want this myself as well since I won't have ads on my internal youtube server - plus it will add an extra step and thus keep me from getting distracted by something that isn't a good use of my time to watch.)
EvanAnderson 8 hours ago [-]
> Looks good, but I want to MitM my network.

Increasingly that kind of requirement puts you in the same camp as oppressive nation states. Being a network operator and wanting to MitM your DNS makes you a political actor. Devices you paid for, but don't actually own, will end-run your efforts by using their own hard-coded DNS servers. (See https://pc.nanog.org/static/published/meetings/NANOG77/2033/...)

mannyv 2 minutes ago [-]
With MikroTik, and presumably other vendors, you can force DNS to your DNS server. I do this so I can Pi-hole everything and see what sneaky things devices are doing.
bluGill 8 hours ago [-]
Fortunately I own my firewall. Though mostly I'm talking about linux machines that I own and control the software on.

Though I fully understand I'm in the same camp as oppressive nation states. But until my kids get older I'm in charge; I need to set them up for success in life, which is a complex balance of letting them have freedom without allowing them to make too many bad decisions. Not getting their homework done because they are watching videos is one of the bad decisions I'm trying to prevent.

throwway120385 5 hours ago [-]
Importantly, this is a reasonable thing to do because sites like Youtube are designed to draw their attention away from whatever important thing they're doing so that Youtube can serve them advertisements. So anyone thinking a parent trying to control what their kid watches is oppressive somehow is pretty deeply in the wrong. As a parent myself I would consider doing this to keep my son from falling into the traps that are set by giant multinational internet companies like Google to get him to form habits around Google instead of habits around what he wants or needs out of life.

So really instead of thinking about this like "parents are acting like nation states" I think it's much better to think of it like "parents are countering corporate nation states."

EvanAnderson 5 hours ago [-]
It's totally reasonable. My position is that I think network operators and owners should be able to do what they want. I was just pointing out that virtually any time a network operator or owner wants to control the traffic on their network, a certain crowd comes out of the woodwork and decries abuse by bad actors.
EvanAnderson 5 hours ago [-]
> Fortunately I own my firewall.

I was thinking more about embedded devices that people buy but don't own (Chromecast devices, "Smart" home doodads, etc). You can stick them in a VLAN and filter their access to the Internet but they're inscrutable inside and have opaque, encrypted communication with their "mother ship".

I think your goal with your kids is laudable. I do the same thing. It limits the ability to use off-the-shelf devices and software, and I'll get more flak about it as my daughter gets older and is excluded from the "social" applications that I can't allow her to use because they're closed-source and not able to be effectively filtered. I'll burn that bridge when I get there, I suppose...

tredre3 6 hours ago [-]
> Devices you paid for, but don't actually own, will end-run your efforts by using their own hard-coded DNS servers.

Not just devices, Jetbrains software has hardcoded DNS too. I've had to resort to blocking its traffic entirely because of the sheer number of servers and ports it tries in order to work around my DNS blocking, now I allow traffic only during license/update checks. I'm sure other large vendors do something similar.

https://intellij-support.jetbrains.com/hc/en-us/community/po...

recursive 4 hours ago [-]
Parents are the oppressive nation-states of their families.
andiareso 7 hours ago [-]
What services are you self hosting for local YouTube? Right now I just hand pick videos and they get lifted by my plex server, but having a nice route to my internal YouTube will be great for when my kids get to that age!
bluGill 7 hours ago [-]
I'm looking for an answer to that. https://invidious.io/ looks like what I want, but I haven't tried it to see.
lenova 4 hours ago [-]
Out of curiosity, which software/app are you using to MitM on your home network?
bluGill 3 hours ago [-]
Currently I'm not. I would like to, but I'm not sure how to make it work. If I have a youtube video that I downloaded, I can make youtube.com point to my own web server, but everything after the domain needs to point to the correct things to make it play and I'm not sure how to do that (I also haven't looked).
ndriscoll 2 hours ago [-]
You'll probably have an easier time blocking youtube (or the Internet in general) on the devices in question and running something like Jellyfin locally to serve your library.
bluGill 2 hours ago [-]
The hard part is that my kids' online piano lesson embeds youtube videos for the lesson. They have enough other content that I paid for an account for my kids, but the videos point to youtube, not someplace they host, which means I can't block any of youtube. This is a common way to do things - my kid's school often sends them to some youtube video for some lesson.

Of course once you finish one youtube video it switches to a "you might want to watch next" which is not the educational content I want them on.

jc__denton 3 hours ago [-]
One headache I've had with internal LE certs is bots abusing the CT logs to attempt probing internal names. As a result, I started requesting wildcard certs from LE. Somehow that feels less secure, because even though I'd probably recognize abuse of the cert - friends and family wouldn't. It's the same reason I don't want less technically adept friends and family having to deal with my own CA. Install one arbitrary cert ... what's the problem with this random, sketch one I downloaded?
christina97 8 hours ago [-]
Dumb question: lots of folks are talking about name constraints not being understood by old clients since they don’t understand that extension. But is this not exactly the point of critical designation in extensions: is the client not supposed to fail if it comes across a critical extension it doesn’t understand?
michaelt 5 hours ago [-]
For one thing, the fact something's supposed to fail on unexpected input doesn't always mean it will fail.

For another, some implementations thought they understood name constraints, but had bugs in their implementations. For example, applying name constraints correctly to the certificate's Subject Alternate Name but not applying them to the Common Name.

dfox 4 hours ago [-]
As for the overall X.509 ecosystem (not limited to name constraints), the certificate validation logic of common clients accepts various subtly, but completely, invalid certificates, because CAs used to sign (or even use as root certificates) various kinds of invalid certificates. One can probably even find a certificate that should be logically trusted but isn't even a valid DER encoding of the (TBS)Certificate.
max_ 7 hours ago [-]
Is there a simple tutorial on how I can use the .internal domain for web applications within my own private network?
xyst 2 hours ago [-]
wonder if it's just better to not deal with name constraints and self signed certs. lets encrypt issues certs for domains with dns validation.

so why wouldn't something like this work:

- designate sub domain for private network usage (ie, *.internal.example.dev)

- issue certificates using ACME compatible script/program (ie, lego) for devices (ie, dev1.internal.example.dev, dev2.internal.example.dev)

don't have to deal with adding self signed certs to trust stores on devices. don't have to deal with messiness of name constraints compatibilities across apps. just plain ole TLS
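
e.g. with lego it's roughly a one-liner per name (everything here is a placeholder; the cloudflare provider is just an example and, as far as I remember, wants its API token in the environment):

    # hedged sketch: DNS-01 issuance for an internal-only hostname, nothing served publicly
    CLOUDFLARE_DNS_API_TOKEN="xxxx" \
    lego --email admin@example.dev --dns cloudflare \
      --domains dev1.internal.example.dev run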

speakspokespok 2 hours ago [-]
If your audience is North American English speakers, on the login page, use "For the full experience".

Using fullest in the context of 'lives life to the fullest' is grammatically correct but English is strange and for this context you'd want the former.

cyberax 3 hours ago [-]
One problem with wildcard certs is that any host can impersonate any host within the wildcard zone.

It would be great to be able to get a certificate for an intermediate CA that is limited to one domain, and then use this CA to issue certs as needed.

billpg 12 hours ago [-]
Is "name constraints" new? I wanted to do something similar a decade or two ago and found I'd have to be trusted for all domains, which I wanted to avoid.
michaelt 12 hours ago [-]
It's been around since ~2008 when rfc5280 was released.

But it's long been stuck in a cycle of "CAs won't issue name-constrained certificates because not all clients support it properly" and "Clients don't bother to support it properly because CAs won't issue name-constrained certificates"

And even if today's clients all support it properly - there will always be some users running ancient smart TVs and android phones that haven't received a software update in a decade.

toast0 5 hours ago [-]
A decade ago, name constraints was available, but support wasn't really there. I was looking into making a company CA for internal tools, but I didn't want to be able to MITM employees going to unrelated sites, and I couldn't mandate specific browsers, so we ended up using a commercial CA for everything.

It looks like support is fairly wide now, but you'd probably still need to test and confirm it works with all the tools you want, and there's still some risk to users in case the constraints don't catch everything.

rompledorph 5 hours ago [-]
I would like my public CA to sign me an intermediate certificate with my domain as a name constraint, so I could use the built-in trusted CAs.
Wowfunhappy 9 hours ago [-]
Is there really any benefit of this over just using HTTP?

What is the threat model in which an attacker could MitM your internal network?

NotPractical 6 hours ago [-]
* Some functionality is off-limits for sites loaded via HTTP. (Another commenter mentioned clipboard access.)

* Browsers will display annoying warning symbols whenever you try to access sites via HTTP.

* If you live in a shared living space such as an apartment you probably don't have control over your home network.

* Even if you have control over your network, a single compromised IoT device is enough to sniff your internal network traffic, assuming WPA2. (Probably not super likely tbh.)

poincaredisk 6 hours ago [-]
>What is the threat model in which an attacker could MitM your internal network?

Police raid on your home/company. Malware on a router. Malicious actor in the server room. Possibilities are endless.

SSL added and removed here ;-)

(this is a reference, look it up if you don't recognize it)

marginalia_nu 4 hours ago [-]
Router malware is the one thing out of those that seems plausible.

If you have physical access, TLS isn't much protection against eavesdropping. At that point they can just compromise your hardware instead.

cesarb 6 hours ago [-]
> Malware on a router.

It doesn't even have to be on the router, just the same network segment plus some ARP spoofing tricks (assuming your switch doesn't have ARP spoofing protections or they haven't been enabled) could be enough to MitM a connection.

8organicbits 9 hours ago [-]
I travel between networks with my phone and laptop. Software will ping out using whichever network I'm on, trying to connect to its backend. If I connect to hostile/compromised WiFi, those connections are at risk.
yjftsjthsd-h 5 hours ago [-]
Can't any client on the same wifi read your traffic by just putting their wifi card into promiscuous mode? Obviously depends on who uses your wifi and your threat model, but that seems like a problem.
NotPractical 2 hours ago [-]
Yes, on WPA2. WPA3 introduced per-client encryption keys.
feirlane 9 hours ago [-]
One use-case I hit just recently is web apps hosted on my internal network: without HTTPS, Firefox won't let me click the "copy to clipboard" buttons on those pages.
klysm 9 hours ago [-]
Buy a domain and use that for your internal network. You can use Let's Encrypt and even get wildcard certs.
egberts1 9 hours ago [-]
So close, so close.

Has everything, except the nigh-impossible DNSSEC support for LAN; good stuff, nonetheless.

ninkendo 10 hours ago [-]
If only there was a system to hint in DHCP (or a v6 RA) what certificate authority serves the .internal domain for the current network.

Devices would treat .internal as special and would validate that the hinted CA only applied to that subdomain, and would only use that CA when connected to the corresponding network.

Or maybe the DHCP/RA could hint at keys to use to validate DNSSEC for the internal DNS server, and the CA cert could live in a well-known TXT record…

Then you could have all devices work with internal certs out of the box with no config. One can dream…

juliuskiesian 12 hours ago [-]
How does this compare to mkcert?
kreetx 12 hours ago [-]
mkcert might be getting this as well: https://github.com/FiloSottile/mkcert/pull/309/commits/92215... (this is linked from the current submission's readme)
globular-toast 9 hours ago [-]
I went down this path, but installing CA certificates is a pain. There isn't just one trust store per device, there are many. Make your own CA if you want to find out how many there are...
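
To give a flavour of "many", here are just some of the Linux-side ones; these are hedged examples, paths and tools vary by distro, and the Firefox profile path differs per machine:

    sudo cp my-ca.crt /usr/local/share/ca-certificates/ && sudo update-ca-certificates   # Debian-ish system store
    certutil -d sql:$HOME/.mozilla/firefox/<profile> -A -t "C,," -n my-ca -i my-ca.crt   # Firefox/NSS store
    keytool -importcert -cacerts -alias my-ca -file my-ca.crt                            # Java's cacerts

And that's before Docker images, Python's certifi, Node, and whatever each phone OS wants.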

Like others I went with just having my own domain and getting real certs for things.

Am4TIfIsER0ppos 5 hours ago [-]
No, why would I want that? Do I not trust my switches and routers to send the packets to the right host? Do I not trust my DNS to send the right address for the hostname? Do I not trust the other devices on the network to not be sniffing? Okay maybe that one.

Browsers could stop with their false warnings about password forms being insecure, and then I'd be happy.

nh2 10 minutes ago [-]
ARP, DHCP, and DNS are all unencrypted and spoofable protocols.
Havoc 14 hours ago [-]
tbh the skill level required for a valid wildcard cert isn't all that high either.
kreetx 12 hours ago [-]
This is about creating and using a domain-restricted CA, which is then used to create server certificates. Point being that your (tech savvy) friends are willing to install the CA because it can only ever validate some specific subdomains (and not MITM the entire internet).
dveeden2 13 hours ago [-]
VALIDITY_DAYS="3650"

I'd rather see something with 90 days and ACME. Not sure why there isn't a simple certificate management tool that does this and maybe even brings a simple UI?

kreetx 12 hours ago [-]
This is for an internal network.