In a few days Linode will force the upgrade to Kubernetes v1.27, and I almost arrived late to the game.

Upgrades are normally painless after the usual due diligence of checking release notes and upgrade procedures (remember removing PodSecurityPolicy?): upgrade the control plane, then recycle all the worker nodes.
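On providers without a one-click recycle, the same rolling replacement can be done by hand. A minimal sketch (the node name and the `--delete-emptydir-data` choice are assumptions; adjust to your cluster):

```shell
# List worker nodes and their kubelet versions to pick the recycle order
kubectl get nodes -o wide

# Stop new pods from landing on the node being replaced
kubectl cordon lke-pool-node-1   # hypothetical node name

# Evict running pods; DaemonSet pods stay, emptyDir data is discarded
kubectl drain lke-pool-node-1 \
  --ignore-daemonsets \
  --delete-emptydir-data \
  --timeout=5m

# ...then delete/replace the node through the provider, and watch the
# pods reschedule onto the remaining nodes
kubectl get pods --all-namespaces -o wide --watch
```

Once the replacement node joins and reports Ready, uncordon is unnecessary (it is a new node) and you repeat for the next one.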

After a couple of clicks I enjoyed the “pod dance” across the nodes, and all my services restarted in seconds (including external-dns re-registering the new IP of the LoadBalancer¹).

The only strange thing was how long it took (~10 minutes) for the nodes to become available for scheduling. Since this is a cluster for experiments, I am happy with the result. I cannot stress enough how powerful proper automation is (in this case the standard Kubernetes orchestration, with a bit of help from Flux) and how much of a must-have it is in any infrastructure. If you don’t use Kubernetes, do yourself a favor and use proper IaC and configuration management² to achieve (almost) the same result in an “old” environment.

2024-02-23 update: upgrading to 1.28 was a breeze, and I opted for at least two different node groups to address the availability and cost problems (duh).
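With more than one node group, workloads can be pinned to the pool that fits them. A minimal sketch using a `nodeSelector` (the `lke.linode.com/pool-id` label value and the workload names are assumptions; check the real labels with `kubectl get nodes --show-labels`):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: experiments            # hypothetical workload
spec:
  replicas: 2
  selector:
    matchLabels:
      app: experiments
  template:
    metadata:
      labels:
        app: experiments
    spec:
      nodeSelector:
        lke.linode.com/pool-id: "12345"   # assumed pool id; pin to the cheap pool
      containers:
        - name: app
          image: nginx:stable
```

For availability, pairing this with a PodDisruptionBudget keeps a pool recycle from evicting all replicas at once.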


  1. This is a side effect of cost control resulting in a small node pool.

  2. Or immutable images.