> what do you do if your ISP changes your IP addresses?
I update the DNS record. Manually. It's a once in a blue moon thing, and I assume the probability of it is low enough that it will not occur when I'm so far from home that "it can wait until I get home" doesn't suffice.
15+ years or so now, and that strategy has worked just fine.
… TFA's intro could do with explaining why the IP is so hard coded in the cluster, or in the router? My home router just does normal port-forwarding to my home cluster. My cluster … doesn't need to know its own IP? It just uses normal Ingress objects. (And ingress-nginx.) I'm wondering if this is partly complexity introduced by having a |cluster| > 1, and I'm just on duck tales here. Y'all have multiple non-mobile machines? (I have a desktop & a laptop. I'm not running k8s on the laptop… because it's a laptop. I … suppose I could … and deal with connectivity to the desktop with like Wireguard or something but … why?)
My previous ISP offered static IP addresses, and I had one, since I had a somewhat special offer where the price wasn't terrible. It changed on me one day. They refused to fix that. I was very disappointed.
kukkamario 103 days ago [-]
MikroTik has dynamic DNS that is based on a random unique number for the router. I just point my DNS record to that dynamic DNS address and everything just works.
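For reference, it's roughly this on the router (from memory, so double-check; the mynetname.net name is tied to the router's identity, and the values below are illustrative):

    /ip cloud set ddns-enabled=yes
    /ip cloud print
    #   ddns-enabled: yes
    #   public-address: 203.0.113.42
    #   dns-name: 1a2b3c4d5e6f.sn.mynetname.net

Point a CNAME at that dns-name and the router keeps it current on its own.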
rmwaite 102 days ago [-]
eero also offers this, but only if you pay for their Plus subscription.
vegarde 97 days ago [-]
Author here. A few decades in IT, but still like to learn new things at home.
I do port forwarding for IPv4. But port forwarding on IPv6? You must be kidding me, the /56 I get from my ISP is meant to be used on the inside. It hasn't changed yet, and maybe it never will.
I am running Unifi as my router, and they claim that the prefix IDs assigned to the networks won't change. But I have had it happen - probably through my own fault. I wish I could just hardcode the prefix. As for the MetalLB IP addresses, since writing this I have done away with hardcoding the actual IP addresses and just let the pool assign them. It still means I need to update the firewall rules on my router should they change, and my MetalLB range would of course need to be updated.
The IPv4 address is still hardcoded due to the port forwarding, and since it's private space, it won't change anyhow.
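For the curious, the pool definition is roughly this shape (addresses are illustrative documentation ranges, not my real ones):

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: ingress-pool
      namespace: metallb-system
    spec:
      addresses:
        # private IPv4, matches the port forwards on the router
        - 192.168.1.240-192.168.1.250
        # carved out of the ISP-delegated /56; this is the part that has to
        # move (along with the router firewall rules) if the prefix changes
        - 2001:db8:aa00:10::100-2001:db8:aa00:10::1ff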
As for why I don't do it manually: you must be kidding me. You are reading Hacker News, and you question why people tinker with complex automations instead of just doing things manually? Because I can, of course, and that's the only reason I need.
I like to pretend my home network matters, that I run a mission-critical service.
I run services at home. Multiple. Because I can, and I like to learn. The blog runs at home, of course, even though hosted blog services have existed for years now. I also run Nextcloud. It saves me the cost of extra storage on Dropbox and/or Google Cloud; I do it on my home network instead. Then there's Plex, Home Assistant, plus all the other services a geek would like to run. They all run in my K8s cluster now. I was served well by Docker for years, but I wanted to test K8s, and that was the only reason I needed.
And a cluster? Well, it's called that in Kubernetes. But in reality, it's only one node. Because two would be overkill for home usage.
To sum it up:
I did it because
1) I wanted to
2) I could
3) I would learn something from it
4) It would give a modest benefit.
That's all the reason I needed.
As for why I automate things I could do manually should they happen, see 1, 2, 3 and 4 above.
asmor 103 days ago [-]
> Manually. It's a once in a blue moon thing, and I assume the probability of it is low enough that it will not occur when I'm so far from home that "it can wait until I get home" doesn't suffice.
It's very common here (Germany) to be forcefully disconnected and reassigned an IP from the pool once a day, especially on DSL contracts - and we still have a lot of vectored-to-the-limit DSL. It's in fact so common that JDownloader has a built-in client for some common routers so it can automatically dodge one-click-hoster IP limits.
And our cable internet companies all use CGNAT by now, good luck getting anything through that.
mystified5016 103 days ago [-]
My ISP lets me have an IP for as long as my ONT is online. If it reboots, I'm likely to get a new address if it's offline long enough.
So my ONT is on a UPS. We only get power outages once or twice a year, but I haven't had my IP change in two years now.
And yeah, when it changes I just go into my DNS and update the records. It's like a five minute job. I could probably automate it if I cared to figure out Hover's API. It'd take probably a dozen resets to get a return on that time investment so I just haven't bothered.
I'm not sure if this is a regular feature of fiber networks or just that my ISP is nice. So much better than DOCSIS when I'd randomly get booted off and given a new address.
vegarde 97 days ago [-]
OP here.
Yeah, I don't expect this to really change either, in normal operations. But I'm a home user; I don't pay for a static IP address or a static IPv6 prefix.
But I'm a geek/hacker at heart, so curiosity sometimes gets to me.
I did it because there were things to learn from it.
taskforcegemini 100 days ago [-]
Most likely not DOCSIS-related, as my ISP behaves with it just like you described your current ISP behaving.
storrgie 103 days ago [-]
Potentially they are doing some hairpin rules that require specific enumeration?
TZubiri 104 days ago [-]
Crazy that someone is using something as complex as k8s on a home server without knowing the basics.
Newbies are better served starting with the simple stuff and then moving to the complex if needed
flessner 103 days ago [-]
Well, we've all got to start somewhere, right?
But yeah, I'd personally recommend Docker for self-hosting. Kubernetes or Proxmox always end up being too much to handle for personal use - or even small to medium sized companies.
cassianoleal 103 days ago [-]
> Proxmox always end up being too much to handle
I've been running a 2-node Proxmox cluster for about 3 years with close to no maintenance. What's too much about it?
It gives me easy VM and LXC management and very easy declarative networking and firewalling. That alone makes it worth it for me.
JamesSwift 103 days ago [-]
Right, proxmox (like k8s) is a huge force multiplier for individuals trying to manage large/diverse surface areas. I run both for my own stuff at home (well, k8s in the cloud, but it's not work related) precisely because of the power it offers.
Yes they are complicated. Yes, they are still worth learning. And once you learn them they make a lot of sense to use when given the option.
gh02t 103 days ago [-]
I think OP is saying Proxmox isn't really that complicated compared to K8. I guess my perspective is warped since I know what I'm doing pretty well, but installing Proxmox and setting up some basic VMs or LXC containers (especially if you use the helper scripts) isn't that hard. Sure, there is still some added complexity, but I think that's more than offset by the ease of doing backups alone. Meanwhile, my experiences with K8s have all been mostly painful and I didn't really feel like I gained anything to offset the complexity versus just using Docker (for my personal use, obviously K8 makes sense at enterprise scale or if you use your homelab for learning).
robertlagrant 103 days ago [-]
I think it's good. K8s is well packaged (these days) and quite discoverable. I agree it's likely to throw up some problems, but it's great that it's possible.
TZubiri 103 days ago [-]
I'll be honest, I never learned it. I worked up from network protocols, reached Docker in terms of virtualization, and of course OS fundamentals.
But even in those 3 areas I haven't exhausted all the knowledge and features available.
So I'm just skeptical when someone is using k8s and hasn't mastered the fundamentals. How do they know whether they should be using a high-level k8s feature or a low-level OS feature? This happened to me a lot with Docker when I didn't know better: I was learning how to set memory limits and restart policies on containers instead of learning to do it at the OS level.
vegarde 97 days ago [-]
OP here.
Of course I haven't mastered the fundamentals of K8s yet, I am just beginning to learn it.
Like you, I have been using Docker for years, and to be honest it served me very well - and would probably continue to do so. But as a geek, I have a natural curiosity; sometimes just the curiosity of whether or not something can be done, and how it works, is enough justification to do it.
As you probably read, this is for my home services, which is where I am free to do exactly these experiments before I know the ins and outs of it all.
Some of what I wrote in the blog post, I have already ditched. I am no longer hardcoding the IPv6 addresses from the pools, for example, but I still need to change my pool IP addresses if my ISP changes the range. I am not sure if my way is the correct one. I could do it manually, but what's the fun in that?
There will be more blog posts as I learn, you can be sure (there already are). I have more coming up, but one of my current projects will take a bit of time, as I have decided that writing a Unifi Operator for managing firewall rules is a better way to do it :)
Now, for home use? Who does that, writing K8s Operators?
If you're on hacker news then you should understand that the goal isn't necessarily the result, it's the learning experience.
Operyl 103 days ago [-]
> reached docker in terms of virtualization
Except docker, on its own without something else in the stack, isn't virtualization.
TZubiri 103 days ago [-]
Not arguing this again, go edit the wikipedia article if you are so confident
robertlagrant 103 days ago [-]
Docker is kernel virtualisation. Are you thinking of OS virtualisation, like a VM?
xrisk 103 days ago [-]
Docker does not virtualize the kernel, in fact the kernel version “inside” Docker is the same as the host.
Timber-6539 103 days ago [-]
Virtualization usually refers to OS/device emulation in software. Docker uses kernel namespaces, which is an entirely unrelated feature.
TZubiri 103 days ago [-]
I find it funny how some obtuse devs are unable to use abstraction in software of all things.
anonfordays 103 days ago [-]
Docker is OS-level virtualization. VMs are hardware virtualization. Different layers.
linuxdude314 103 days ago [-]
It’s not virtualization, it’s namespaces. Docker makes use of Linux kernel features; started out with cgroups and now uses libcontainer. Each container is running in its own isolated(ish) namespace on the same host kernel.
It’s _very_ different technology than virtualization.
You don’t need docker to make a container on Linux (or Solaris for that matter).
anonfordays 102 days ago [-]
>It’s not virtualization
You are incorrect, this is OS-level virtualization:
"OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, including containers (LXC, Solaris Containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman)..."[0].
>it’s namespaces. Docker makes use of Linux kernel features; started out with cgroups and now uses libcontainer. Each container is running in its own isolated(ish) namespace on the same host kernel.
Yes, OS-level virtualization.
>It’s _very_ different technology than virtualization.
Incorrect, this is a virtualization technology.
>You don’t need docker to make a container on Linux (or Solaris for that matter).
No one claimed otherwise.
[0] https://en.m.wikipedia.org/wiki/OS-level_virtualization
That isn't even true, you share your host kernel. There are parts of the kernel that aren't namespaced as well. The kernel keyring is probably the big one.
anonfordays 103 days ago [-]
>That isn't even true
You are incorrect, this is true:
"OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances, including containers (LXC, Solaris Containers, AIX WPARs, HP-UX SRP Containers, Docker, Podman)..."[0].
>you share your host kernel
Kernel != OS
>There are parts of the kernel that aren't namespaced as well. The kernel keyring is probably the big one.
Immaterial.
[0] https://en.m.wikipedia.org/wiki/OS-level_virtualization
You can call it what you want but absolutely no one considers chroot virtualization in any meaningful sense. Nothing is being virtualized, containers are just regular processes on the host system.
"OS Virtualization" != "OS" "Virtualization"
TZubiri 103 days ago [-]
1st of all yes, many people consider not only chroot to be virtualization (of the file system). Yes, it is arguable, as it is the birth of lightweight virtualization. But you were wrong in saying no one does.
https://papers.freebsd.org/2000/phk-jails/
https://youtu.be/hgN8pCMLI2U?si=CH-Fpyj16bEWDZzc
2nd, containers go farther and virtualize the network and other resources.
anonfordays 102 days ago [-]
>You can call it what you want
I call it as it is.
>but absolutely no one considers chroot virtualization in any meaningful sense.
Absolutely everyone who's knowledgeable in virtualization considers chroot to be a type of OS-level virtualization.
>Nothing is being virtualized, containers are just regular processes on the host system.
Wrong, "...OS-level virtualization is an operating system (OS) virtualization paradigm in which the kernel allows the existence of multiple isolated user space instances..."
"OS Virtualization" == "OS " + "Virtualization"
lawn 91 days ago [-]
Running something complex at home to learn is definitely a good idea.
npodbielski 103 days ago [-]
Probably he is not entirely a newbie, since he was able to run Kubernetes.
Anyway, I agree that there's no point in using k8s for home stuff. A single instance of anything should be sufficient for needs like that.
On the other hand, maybe someone just likes to tinker with technology.
vegarde 97 days ago [-]
Exactly (OP here).
The goal wasn't necessarily the result.
I like to tinker.
And well...my K8s cluster is only one node so far, so there's limits to what I can play with.
And please: No comments that K8s is overkill. I know, and I don't care :) There's things to learn from it, and that's good enough reason for me.
homodyne 103 days ago [-]
[dead]
manofmanysmiles 104 days ago [-]
How about a wireguard tunnel from an ingress box? You still pay for one VPS, but can run everything locally and just load balance at the ingress. I just manually add configs to nginx, but there are automated tools too.
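Roughly this shape, with placeholder keys and addresses - nginx on the VPS then just proxies to the home box's tunnel IP:

    # VPS: /etc/wireguard/wg0.conf
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <vps-private-key>

    [Peer]
    # the home server
    PublicKey = <home-public-key>
    AllowedIPs = 10.8.0.2/32

    # Home box: /etc/wireguard/wg0.conf
    [Interface]
    Address = 10.8.0.2/24
    PrivateKey = <home-private-key>

    [Peer]
    # the VPS
    PublicKey = <vps-public-key>
    Endpoint = vps.example.com:51820
    AllowedIPs = 10.8.0.1/32
    PersistentKeepalive = 25

The PersistentKeepalive is what keeps the tunnel open from behind NAT, so inbound connections can always reach home even though home never exposes a port.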
TZubiri 103 days ago [-]
Lol, kind of defeats the purpose
yjftsjthsd-h 103 days ago [-]
What defeats what purpose? I don't run k8s out of some love of ... managing external IPs?
TZubiri 103 days ago [-]
Tunneling through a single external node defeats the purpose of hosting k8s on a home server.
Maybe the external ingress node can be a load balancer controlled by the k8s cluster. But then you still have to communicate with the home server, and it has no exposed IP address.
winterqt 103 days ago [-]
> Tunneling through a single external node defeats the purpose of hosting k8s in home server.
How so? You can just rent a cheap server to tunnel through, while having the benefits of your home machine(s) for compute.
> Maybe the external ingress node can be a load balancer controlled by the k8 cluster. But then you still have to communicate with the home server and it has no exposed ip address
Do you mean that you wouldn’t be able to access the K8s control plane endpoint then (which you could if configured properly)? Or something else?
TZubiri 103 days ago [-]
>how so?
SPOF
Daviey 103 days ago [-]
And having a single IP address, with one ISP at home isn't a SPOF?
Daviey 103 days ago [-]
@TZubiri, Then if that is a risk you accept, you could have multiple VPSes and load balance back to your home network, eliminating the new SPOF.
(@'ing because we reached the maximum reply limit)
TZubiri 103 days ago [-]
@Daviey
Or just get your very own static IP.
It's a ZPOF.
Routing happens automatically on nearby router routes
It's deep down a matter of taste: you have a home server in Arizona and you route users to a Hetzner server in Germany and then back?
Don't justify it; just recognize it's in bad taste. Seek to use IP addresses as geographical host identifiers. Do not hide origin or destination. Minimize.
TZubiri 103 days ago [-]
You are adding a(nother) SPOF.
asmor 103 days ago [-]
Are you even running a real homelab if you're not running MetalLB in BGP mode?!
Sarcasm obviously, but it's a fun exercise, especially if you get a real (v6, probably) net to announce.
vegarde 97 days ago [-]
It's on my bucket list.
My Unifi Cloud Gateway Max doesn't (yet) support BGP, but other Unifi devices do, so I do hope that it'll come to my device too. If not, I'll have to think of other ways to test it. But in the meantime, there's plenty of other stuff to learn.
I have a real, ISP-routed IPv6 net to play with.
linuxdude314 103 days ago [-]
Right?! Get some free IP space from he.net tunnel broker and build your own IPv6 AnyCast network using Quagga for BGP.
Kidding about building your own AnyCast network (although you really could…), but he.net tunnel broker is GOAT.
asmor 102 days ago [-]
I wish Happy Eyeballs worked better or let me absolutely prefer IPv4, because every time I set up a free v6 tunnel I get banned about 2 days later for pushing a few terabytes over it, when all I wanted was to be able to SSH into every one of my containers on a cheap VPS separately. Or the tunnel is so slow, I'm degrading my entire internet connectivity.
TZubiri 102 days ago [-]
>free v6 tunnel
There's your problem. You need some form of cost attached to identity assets - anything under the IANA umbrella, IPs/domain names. This is in order to prevent Sybil attacks. This was all well studied in the hashcash/Bitcoin era as PoW.
So yeah, you actually need to spend some money, not in exchange for something here, but as the very thing you need: to distinguish yourself from those that spend $0, not because they are cheap, but because they may do it 1000 times and ruin your pooled reputation.
asmor 100 days ago [-]
Unfortunately there's quite a market gap if you need bandwidth. I can't just pay HE 10 bucks a month, their service doesn't work that way. Hetzner would work, but their IP space is very often randomly blocked from lots of things.
CoolCold 100 days ago [-]
> because every time I set up a free v6 tunnel I get banned about 2 days layer for pushing a few terrabytes over it, when all I wanted was to be able to SSH into every one of my containers
can you describe a bit more? I cannot connect the dots here on how terabytes are tied to free v6 tunnel - likely I'm missing some details. Thank you in advance.
asmor 100 days ago [-]
If someone else provides IPv6 connectivity to you, you use their bandwidth. Some apps like Steam see IPv6 connectivity and use it regardless of what you'd prefer it to use, hinting mechanisms and all. So while I just wanted to use the tunnel for things that IPv4 does not provide, I always end up tunneling half my traffic over it, which free services don't like.
Aachen 103 days ago [-]
I do this for email after I got a new IP address from the shit KPN pool instead of the clean XS4ALL pool. Outgoing email proxies through an IP address at Hetzner. It's not pointless because
- I get specs from an old laptop (that I had laying around anyway) that would probably cost like 50€/month to rent elsewhere. Power costs are much lower (iirc some 2€/month) and it just uses the internet subscription I already had anyway
- When I do hardware upgrades, I buy to own. No profit margin, dedicated hardware, exactly the parts I want
- All data is stored at home so I'm in control of access controls
- Gigabit transfer speeds at home, which is most of the time that I want to do bulk transfers
I see various advantages that still exist when you need to tunnel for IP-layer issues
Edit: got curious. At Hetzner, a shared cpu with 16GB RAM is 30€/month, but storage is way too little so an additional 53€/month just for a 1TB drive needs to be added (at that price, you could buy one of these drives new, every month again and again and still have enough money over to pay for the operating electricity; you'd have a fleet of 60 drives at the expected lifetime of 5 years, or even at triple redundancy you get 20TB for that price). I'll admit the uplink would be significantly better for this system, but then my downlink isn't faster than my uplink so at home I wouldn't even notice. Not sure how much of a difference a dedicated CPU would make
At AWS, I have to guess things like iops (I put in 0 to have a best-case comparison), bandwidth (I guessed 1TB outbound, probably some months is five times more, some months half). It says the cheapest EC2 instance with these specs, shared/virtual again mind you so no performance guarantees, is t4g.xlarge. With storage and traffic, this costs 301$/month which I guess is nearly the same in euros after conversion fees. If I generously pay 3 years up front, it's only 190$ monthly + 2'156$ up front, so across 3 years that's 250$/month (and I'm out of over 2 grand which has an expected average return of 270€ at the typical 4% in an all-world ETF — I could nearly fund the electricity costs of the old laptop just from the money I lose in interest while paying for AWS! Probably 100% if I bought a battery and solar panel to the value of 2150€)
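(Double-checking my own 3-year math:)

    # sanity check of the AWS 3-year figures above, in $
    upfront = 2156
    monthly = 190
    months = 36
    total = upfront + monthly * months   # 8996
    print(round(total / months))         # -> 250, i.e. ~250$/month as stated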
I actually have more than 1TB storage but don't currently use all of it, so figured this is a fair comparison
The proxy I currently have at Hetzner costs me 4€/month, so I save many multiples of my current total cost (including the at-home costs) by self hosting.
dddw 103 days ago [-]
For cheap storage at Hetzner you could add a Storage Box (not fast, but fine), or now even object storage.
mychael 103 days ago [-]
This is an example of optimizing something that shouldn't exist. They can simplify all of this by adding Cloudflare tunnel or Wireguard to proxy traffic from the outside world to a k8s Service running in the cluster.
davkan 103 days ago [-]
I have one A record for my home IP address. This is dynamically updated by my router whenever the public IP address changes. Everything else is a CNAME pointing at the A record. Completely set-and-forget, and supported by most off-the-shelf consumer routers and router OSes like VyOS.
This is a much preferable solution to me as there are no changes to external-dns resources when the public IP changes. Granted, i don’t run a dual stack setup.
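i.e. the zone looks something like this (names and address made up):

    home.example.com.        60  IN  A      203.0.113.42   ; the one record the router's DDNS client rewrites
    nextcloud.example.com.  300  IN  CNAME  home.example.com.
    plex.example.com.       300  IN  CNAME  home.example.com.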
vegarde 97 days ago [-]
I run dual-stack.
I used to run a dynamic DNS service, but these days I prefer to do it myself towards the API of my DNS provider. External-DNS in k8s is pretty neat that way.
Changing the K8s resources automatically is not really a big deal. It's a fun exercise.
merpkz 103 days ago [-]
Kubernetes admin here with ~2y experience. Since a lot of you have a misconception of what this guy is doing, I will try to explain. The author wrote a piece of code which interacts with the network gateway to get the IPv4/IPv6 network address and then updates the Kubernetes configuration accordingly, from within a container running on said cluster. That seems to be needed because the MetalLB component in use exposes Kubernetes deployments in the cluster via a predefined IPv6 address pool carved from the range given by the ISP, so if that changes, the cluster configuration has to change too. This is one of the most bizarre things I have read about Kubernetes this year and probably shouldn't exist outside a home testing environment, but hey, props to the author for coming up with such an idea.
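In rough Python terms, the loop presumably looks something like this - to be clear, this is my sketch of the idea, not the author's code; the gateway endpoint is a made-up stand-in, while the Kubernetes calls are from the real official client:

    # Sketch only. MetalLB pools are plain custom resources, so once you know
    # the new prefix you can patch the pool like any other object.
    import requests
    from kubernetes import client, config

    def delegated_prefix() -> str:
        # Hypothetical router API returning e.g. {"prefix": "2001:db8:aa00::/56"}
        return requests.get("https://gateway.local/api/wan/ipv6-prefix",
                            timeout=10).json()["prefix"]

    def main():
        config.load_incluster_config()  # we run inside the cluster
        api = client.CustomObjectsApi()
        base = delegated_prefix().split("/")[0].rstrip(":")  # naive string handling, fine for a demo
        pool = f"{base}:10::100-{base}:10::1ff"              # carve a slice for the LB pool
        api.patch_namespaced_custom_object(
            group="metallb.io", version="v1beta1",
            namespace="metallb-system", plural="ipaddresspools",
            name="ingress-pool",
            body={"spec": {"addresses": [pool]}},
        )

    if __name__ == "__main__":
        main()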
vegarde 97 days ago [-]
OP here.
Thanks. If I was a company, I would probably be in control over when my IPv6 range changes. And if my ISP is any good (I just recently switched to it), my IPv6 network should stay the same.
The network range in a home setting is always given by your ISP, most likely via DHCPv6 prefix delegation; very rarely do you, in a home setting, dish out for a permanent IPv6 network range. Granted, most decent ISPs try to persist it, since there's no good reason not to, and it's a strong recommendation from standardization bodies etc. But it's still just best effort: accidents happen, state gets lost, and suddenly you have a different network.
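If the prefix-ID mechanics are unfamiliar: the router takes the delegated /56 and hands each of your networks one of its 256 /64s. In Python terms (documentation prefix, not my real one):

    import ipaddress

    # ISP delegates a /56; the router assigns each network a /64 out of it,
    # identified by a "prefix ID"
    delegated = ipaddress.ip_network("2001:db8:aa00::/56")
    lans = list(delegated.subnets(new_prefix=64))  # 256 candidate /64s
    print(lans[0x10])  # prefix ID 0x10 -> 2001:db8:aa00:10::/64

If the ISP hands you a different /56, every one of those /64s moves with it - which is exactly why hardcoded addresses downstream hurt.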
Sure, it'd probably take me less than an hour to just change everything, but we are hackers here, so what's the fun in that? I gravitate towards perfecting things even beyond pure needs, just because I can. At work, I have to call it a day when it gives no more significant gain; at home I am free to think "this is fine, but can I actually do it better?". If the answer is yes, and you have the time, I'd say go for it. Some people like to watch cat videos on Youtube, I prefer to tinker with getting stuff to work. Sometimes it's useful, sometimes it's just for the fun of it.
I'm on my way to improving this, by the way. I plan to create a Unifi Networking Operator that can help me with not only this, but configuring my Unifi gateway and firewall rules through Kubernetes properties. It will be more logical to let my "dynamic IP" setup just change Kubernetes properties, and let the Operator handle the Unifi configuration of it.
Overkill? Hell, yes! Fun? For me, at least? Will I learn something? Yes, I will learn how to create a Kubernetes Operator!
Yeah, I'm a beginner in Kubernetes, but not in IT and sysadmining in general - I've got 30 years of experience there. For now, Kubernetes is a just-for-fun project at home, but it's used to run my day-to-day home services, which makes it even more fun to improve it. We use Kubernetes where I work, though not in my area, so it's not inconceivable that my home tinkering will be of benefit at work some day.
And yes, I run a personal blog (in my Kubernetes cluster). I try to make it a bit educational, with more or less repeatable experiments for people to pick and choose from.
Some will be good, some will probably be a bad idea. But as long as there's learnings to be had, it's worth doing.
TZubiri 103 days ago [-]
"My ISP is in total control over my external IP addresses. I don’t pay for permanent IP addresses, and while they haven’t so far changed neither my IPv4 address or my IPv6 network, it can happen. Probably by mistake, since I have no kept my current ones for three months"
If you can't shell out a buck or persuade your ISP to reserve a static IP for you, try to persuade their DHCP server.
https://datatracker.ietf.org/doc/html/rfc2131#section-3.5
And, again, if you can't handle the fundamentals, drop the Google-level tech. You are not that deep.
kuschku 103 days ago [-]
> If you can't shell a buck or persuade your isp to reserve a static ip for you. Try to persuade their dhcp server.
My ISP offers either a home contract, where the IP forcefully changes every 24h and I can't pay for a static IP, or business contracts for businesses with a minimum of 10 employees - but that contract requires proving you're registered as a business with the local chamber of commerce & have the tax paperwork.
Many people have no easy option to get a static IP.
voxadam 103 days ago [-]
> My ISP offers either a home contract, where the IP forcefully changes every 24h and I can't pay for static IP, or they offer business contracts for businesses with minimum 10 employees, but that contract requires proving you're a registered as business with the local chamber of commerce & have the tax paperwork.
Wow, and I thought Comcast was customer hostile, this is just bonkers.
kuschku 103 days ago [-]
German ISPs are kinda weird like that. Unmetered gigabit FTTH for $70/month, no CGNAT, and open peering. 3 phone numbers and 2 phone lines included. But static IP isn't even an option.
I've built custom dyndns scripts to automate everything away, so nowadays it's only a second of interruption (thanks to some DNS TTL trickery), but it's nonetheless really annoying to deal with.
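The core of such a script is tiny. A minimal sketch, assuming a made-up provider API (the real one is whatever your DNS host exposes; api.ipify.org is a real address-echo service):

    import requests

    RECORD_API = "https://dns.example/v1/zones/example.com/records/home"  # hypothetical
    HDRS = {"Authorization": "Bearer <token>"}

    def main():
        ip = requests.get("https://api.ipify.org", timeout=10).text.strip()
        rec = requests.get(RECORD_API, headers=HDRS, timeout=10).json()
        if rec.get("content") != ip:
            # the "TTL trickery": keep the record at 60s so resolvers re-ask
            # almost immediately after the forced reassignment
            requests.put(RECORD_API, headers=HDRS, timeout=10,
                         json={"type": "A", "content": ip, "ttl": 60}).raise_for_status()

    if __name__ == "__main__":
        main()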
TZubiri 103 days ago [-]
>Wow, and I thought Comcast was customer hostile, this is just bonkers.
It's caused (magnified) by upstream requirements from the RIR due to IPv4 exhaustion, and possibly reputation management.
In order to get a block, ISPs need to provide a plan for how they will put the IPs to good use for the benefit of society.
homodyne 103 days ago [-]
It's really concerning that we have people trying to use tools like Kubernetes without understanding the basics that underlie them, like networking.
Post author should read Beej's Guide to Network Programming and come back when that's comfortable for them.
vegarde 97 days ago [-]
All of that is sort of "best effort" when it comes to the home market. While an ISP should honor it and give you the same IP address (or IPv6 range), at home we're in the consumer market where it's good enough if you're able to hit youtube from your browser.
I know my way around networking, I know that my networking equipment will always try to request the same IP address/IPV6 range. But I can't do shit with it if my ISP has lost my reservation and given that to someone else.
And as numerous other comments here have shown, some ISPs are incompetent bordering on malicious, but sometimes, they're the best you have. My ISP is pretty decent, but that doesn't keep me from playing with "what if"-scenarios and solve things that may not ever happen for me.
I learnt something from it - not the core networking stuff, there was nothing new to me in that, but I learnt a lot about how to do things more dynamically in Kubernetes, and I had to dig into the APIs of my router to achieve what I needed.
However, when I write a blog post, I also try to assume not everyone has a deep understanding of these things. If someone read my blog post and only then learnt about IPv6 addressing, that's great! Then they too would have learnt something, even if Kubernetes is not for them.
from-nibly 103 days ago [-]
Holy cow, I've been doing kubernetes for 8+ years at this point. No idea why your home IP address would change a single thing in kubernetes.
advisedwang 103 days ago [-]
In the opening sentence the author says they are using external-dns to set outside DNS to point to their cluster. You need the IP address for that.
(although they'd be better served by just using dynamic DNS rather than this complexity)
from-nibly 103 days ago [-]
But why would you expose your cluster directly to the internet in the first place?
vegarde 97 days ago [-]
It would if you were running IPv6 with real IPv6 addresses on your K8s ingress addresses.
And I do.
globular-toast 103 days ago [-]
Uhh. What is all this for? My IP address can change. I just use a dynamic DNS client to update my DNS record using my registrar's API. It's been this way since, like, 2001? (Well, most registrars didn't have APIs back then, but there was dyndns).
Saris 100 days ago [-]
Nothing, inside the network addresses don't change, so K8S will never see anything change even if your external addresses update.
david24802 103 days ago [-]
Thanks for the post. I ran into the same issue with assigning IPv6 addresses to k8s pods. Wish there were easier solutions to handle the prefix changing.
vegarde 97 days ago [-]
Good advice: never use a real range for the pods.
I run K3s, and it's not supported to change the internal ranges after creating the cluster.
On the ingress, i.e. with metalLB, it's different, of course. And that is what my solution is for.
I'm going to improve it in the coming weeks. It's not going to be any easier, though, it will just be more fun for me :) (I like to learn new stuff).
It's probably going to be a bit more robust and logical, though, and not look as hacked-together as my current script. But unless I actually get around to making a plugin architecture for it, it's still going to assume you're using a Unifi gateway.
wutwutwat 104 days ago [-]
Dealing with changing residential ips is nothing new. It's interesting to see how it's still being solved for even in this overly complex k8s landscape we find ourselves in now.
Back in the day we'd use free services like https://freedns.afraid.org/ on a cron to refresh the ip every so often.
I used afraid to refresh my dial up ip address, for my "hosting service" domain. The "hosting service" was an old tower pc living in the cabinet underneath a fish tank. Ops was a lot different back then...
Nowadays, if you're poking holes in your firewall and exposing your IP address to the world, you're doing it wrong. We've moved away from that model. There's no need to do that and expose yourself in those ways when you can instead tunnel out. Cloudflare/Argo tunnels, or Tailscale tunnels: dial out from your service and don't expose your system directly to the open internet. Cloudflare will even automagically set the DNS for your domain to always route through that tunnel. Your ISP-allocated IP address is irrelevant, and nothing ever needs it because nothing ever routes to it. Your domain routes to a CF endpoint, and your system tunnels out to it, meeting in the middle. No open ports, no firewall rules, no NAT bs. Only downside is, you're relying on and trusting services like CF and Tailscale.
joecool1029 104 days ago [-]
Yeah, afraid.org used to work great. I mean, the service still works great, but Google appears to blacklist any domain with its nameservers there; all email goes to spam. I just found this out in the past month. I kept all my records the same and just moved the nameservers to Fastmail, and the problem resolved immediately.
I have an unusual dynamic ip situation I've been taking advantage of with a different system. Some years back I noticed T-Mobile phone lines allow inbound connections on ipv6 (T-Mobile Home Internet unfortunately does not). I have a small weather station I run on a rpi3b I can access anywhere by using ddclient on the pi with cloudflare api key and it sets the AAAA record which is proxied by cloudflare, I leave the A record blank. If any users on ipv4 try to visit the site, cloudflare proxies it for them, works pretty reliably.
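The ddclient side is only a few lines. From memory, and option names vary between ddclient versions, so treat this as a sketch to check against your own install:

    # /etc/ddclient.conf (sketch)
    protocol=cloudflare
    zone=example.com
    login=token
    password=<cloudflare-api-token>
    usev6=ifv6, ifv6=eth0   # newer ddclient: read the v6 address off the interface
    weather.example.com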
LoganDark 104 days ago [-]
> Some years back I noticed T-Mobile phone lines allow inbound connections on ipv6
HELL YEAH. This is exactly how ipv6 is meant to be implemented. They deserve some real praise and recognition for this.
lostmsu 104 days ago [-]
> T-Mobile Home Internet unfortunately does not
Why did they make it so inconvenient?
LoganDark 104 days ago [-]
Home Internet is seemingly far more affordable than their phone plans. So they have other ways of encouraging you to use the phone plans.
joecool1029 103 days ago [-]
It's about the same as my per-line cost. The real reason, I suspect, is to try to discourage upload usage. T-Mobile's main spectrum is deployed with TDD, which schedules send and receive on the same frequency. The config they use allocates 80% or 90% of the time to download and is set at the network level, not dynamically per device. Basically, they are way more constrained on upload capacity the way things are currently built out.
globular-toast 103 days ago [-]
"Exposing your IP address to the world" seems like such an arbitrary thing to care about when you're opening a tunnel with the express intention of letting people in. No NAT bs, but you've got magic tunnel bs instead that you have no control over. And of course you're still "exposed to the world". Your IP address is public. That's the whole point. So you're going to be using a firewall regardless, what difference does one rule make really?
npodbielski 103 days ago [-]
There are more downsides. Instead of maintaining 1 server you have to maintain a server and a tunnel, or 2 servers and a tunnel.
If someone does not know how to maintain internal network DNS and DHCP, then if the internet is down, your services are down too, because they are likely only reachable through the external domain.
I agree, though, that someone who does know much probably should not do that. If you know what SSH and the root account are, probably less is more.
homodyne 103 days ago [-]
[dead]
TZubiri 104 days ago [-]
[flagged]
yjftsjthsd-h 103 days ago [-]
>> ddns
> Yeah don't do that if you want to be a professional with pride in their craft
Why not?
TZubiri 103 days ago [-]
Taste
chgs 103 days ago [-]
I’ve used ddns to deliver service to millions, it’s a helpful tool. There’s no “taste” in a professional world, there are results.
anonfordays 103 days ago [-]
>>ddns
>Yeah don't do that if you want to be a professional with pride in their craft
Why? All of the large DNS providers have DDNS functionality. Even AWS has documented methods for DDNS on Route53.
TZubiri 103 days ago [-]
Taste
anonfordays 103 days ago [-]
What does that mean in context of DDNS? DDNS is the correct proposed solution.
JamesSwift 103 days ago [-]
Ddns is absolutely the right answer in a lot of cases, and even better, a lot of routers have built-in support for managing the record for you.
sampullman 103 days ago [-]
What's a better option than DDNS that doesn't add a bunch of latency to every connection?
TZubiri 103 days ago [-]
Static, dedicated ip address
sampullman 101 days ago [-]
Sorry, I thought the context was an environment in which static IPs aren't an option.
For example, my home internet provider offers them, but the connection is guaranteed to go down once a month for maintenance. So I think DDNS is strictly better in this situation.
gsich 103 days ago [-]
DDNS does not imply foreign domain.
mannyv 103 days ago [-]
You update your cluster with your new IP address.
How you do that depends on your level of expertise.
alfons_foobar 103 days ago [-]
Not even necessary, just update the DNS record pointing to your home address
bdhcuidbebe 101 days ago [-]
> what do you do if your ISP changes your IP addresses?
You use dyndns
citizenpaul 103 days ago [-]
My experience is this is no longer a problem. Ever since the US gov legalized data mining/spying/tracking, I have not had my residential IP change. I think it's more profitable to spy by essentially giving "free" static IPs to all customers.
dharmab 103 days ago [-]
More likely your ISP uses CGNAT and your router's IP is not a "real" public IP address.
linuxdude314 103 days ago [-]
This could all be solved using HE.net tunnel broker for free…
vegarde 97 days ago [-]
It could, but then all my IPv6 traffic would take a detour around the network.
My ISP gives me native IPv6. Granted, it hasn't changed yet. I am pretty sure it will, some day. Probably by an accident of mine or the ISP.
tamishungry 103 days ago [-]
Huh? I host my domain with Namecheap and it's a simple curl command to update my DNS daily on my Pi. Why all this?