Talos makes Kubernetes boring (and that's good)

Nov 20, 2025 · 5 min read

SSHing into nodes is an anti-pattern. Talos removes the temptation entirely.

Kubernetes · Talos · DevOps · Infrastructure

Everyone claims to do GitOps. Infrastructure as code. Immutable deployments. Cattle, not pets

Then an incident happens and someone SSHs into a node to “check something real quick”

That quick check becomes a manual fix. The fix works. Nobody documents it. Three months later the node gets replaced and the problem comes back. Sound familiar?

Talos Linux takes a different approach: no SSH, no shell, no package manager. You can’t cheat even if you want to

The “it’s faster” illusion

SSH feels fast. You’re in, you check logs, you tweak a config, you’re out. Five minutes, problem solved

Except it’s not solved. You just hid it

Here’s what actually happens:

  • 5 minutes to SSH and fix
  • 30 minutes to figure out what you changed (if you remember)
  • 2 hours to document it properly (which you won’t)
  • 0 minutes updating your automation (because “it’s just this one node”)

The fix stays on that one node. Your config management doesn’t know about it. Your other nodes don’t have it. You’ve created a snowflake

Next incident on a different node? Same problem, same manual fix, same lack of documentation. Now you have two snowflakes

I’ve seen clusters where nobody dares reboot certain nodes because “they have special configs.” That’s not infrastructure. That’s a collection of problems waiting to happen

Declarative for real this time

Most teams say they’re declarative but their nodes tell a different story. Ansible ran once during setup, maybe runs again during “maintenance windows.” Between those runs? Manual changes accumulate

Talos flips this. The node’s entire state comes from a machine config file. That’s it. No runtime modifications, no package installs, no config file edits. The system is its config

machine:
  type: worker
  kubelet:
    extraArgs:
      rotate-server-certificates: true
  network:
    hostname: worker-01
    interfaces:
      - interface: eth0
        dhcp: true
  install:
    disk: /dev/sda
    image: ghcr.io/siderolabs/installer:v1.9.0

This config defines everything about the node. Apply it, and you get exactly this state. Apply it again, same state. Apply it to ten nodes, ten identical systems

No drift. No “but this node is different because…” No surprises
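Applying that file is a single command against the Talos API. A minimal sketch, assuming the config above is saved as worker.yaml and the node answers on 10.0.0.5 (both placeholders):

talosctl apply-config --insecure -n 10.0.0.5 --file worker.yaml

The --insecure flag is only needed for the very first apply, while the node is still in maintenance mode; after that, the same command runs authenticated through your talosconfig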

How do you even work without a shell?

This is the first question everyone asks. Fair enough, shells are useful

Talos replaces shell access with talosctl, a CLI that talks to the Talos API. Everything you’d do over SSH, you do through this instead

Check services:

talosctl services -n 10.0.0.5

Get logs:

talosctl logs kubelet -n 10.0.0.5

Kernel messages:

talosctl dmesg -n 10.0.0.5

System resources:

talosctl memory -n 10.0.0.5
talosctl processes -n 10.0.0.5

The difference: these commands are read-only. You can observe everything but modify nothing directly. Changes go through the machine config

Feels limiting at first. Then you realize you’re not tempted to “just fix this one thing real quick” anymore. You fix it properly in the config, or you don’t fix it
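In practice, "fix it properly in the config" means changing the machine config and letting Talos converge on it. A rough example using a JSON patch; the sysctl and its value are purely illustrative:

talosctl patch machineconfig -n 10.0.0.5 --patch '[{"op": "add", "path": "/machine/sysctls", "value": {"net.core.somaxconn": "65535"}}]'

There's also talosctl edit machineconfig for interactive edits, but a patch is easier to keep in version control alongside the rest of your configs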

Upgrades that don’t suck

Traditional node upgrades are a process. Update packages, hope nothing breaks, reboot, pray. Maybe you have automation for this. Maybe it works most of the time

Talos upgrades are atomic:

talosctl upgrade --image ghcr.io/siderolabs/installer:v1.9.1 -n 10.0.0.5

The entire OS gets replaced. Not patched, replaced. If something goes wrong, rollback is instant because the previous image is still there
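Rollback isn't a restore procedure either, it's one more command; something like:

talosctl rollback -n 10.0.0.5

That boots the node back into the previous installation still sitting on disk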

This works because there’s nothing to preserve. No custom packages, no modified configs, no state outside what Kubernetes manages. The node is disposable by design

I upgrade Talos nodes the same way I upgrade containers: pull new image, replace, done. No maintenance windows, no “let’s hope apt doesn’t break anything”
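Kubernetes itself gets the same treatment: talosctl can walk the cluster through a version bump too. A sketch, with the control plane node and target version as placeholders:

talosctl upgrade-k8s -n 10.0.0.2 --to 1.32.0

It rolls the control plane components and kubelets forward across the cluster without a separate kubeadm-style dance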

The alternatives

Ubuntu/Debian: Full Linux distro means full attack surface. Package manager means drift potential. Shell access means someone will use it during incidents

Flatcar Container Linux: Better. Minimal, immutable-ish, container-focused. But you still have shell access, which means you can still cheat. And you will, during that 2 AM incident

Bottlerocket: AWS’s answer to this problem. Good design, similar philosophy to Talos. But it’s built AWS-first; if you’re multi-cloud or on-prem, it’s a much harder fit

Talos runs anywhere: bare metal, AWS, GCP, Azure, Hetzner, your home lab. Same OS, same tooling, same workflow

Learning curve vs operational gain

Talos has a learning curve. Maybe two days to get comfortable, a week to feel proficient. You need to understand the machine config format, the API model, the upgrade process
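The core of that curve fits in a handful of commands. A sketch of a first bring-up, with the cluster name, endpoint and IPs as placeholders:

talosctl gen config my-cluster https://10.0.0.10:6443
talosctl apply-config --insecure -n 10.0.0.10 --file controlplane.yaml
talosctl bootstrap -n 10.0.0.10
talosctl kubeconfig -n 10.0.0.10

gen config writes controlplane.yaml, worker.yaml and a talosconfig; everything after that is the same apply-and-observe loop described above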

But here’s what you get:

  • No more “why is this node different”
  • No more config drift investigations
  • No more SSH access to audit
  • No more package update anxiety
  • Upgrades go from “scheduled maintenance” to “eh, I’ll do it now”

The time you spend learning Talos, you’ll save in the first month of not debugging node inconsistencies

When to avoid Talos

Exotic hardware: Talos supports a lot, but if you need specific kernel modules or drivers that aren’t included, you’ll have to build custom images. Possible but adds complexity
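The usual escape hatch is the Talos Image Factory: you describe the extra bits as system extensions in a short schematic and get a custom installer image back. A rough sketch; the extension names are just examples of what exists upstream:

customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/intel-ucode
      - siderolabs/iscsi-tools

You submit that to factory.talos.dev and point the install image in your machine config at the installer reference it returns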

Team not ready: If your team relies on shell access for debugging, Talos will feel like working with one hand tied. You need decent observability first, logs and metrics accessible without node access

Legacy dependencies: Some workloads expect to modify the host. Old monitoring agents, certain storage drivers, anything that assumes it can write to the filesystem. These need rethinking before Talos makes sense

The point of boring

Boring infrastructure is good infrastructure. You don’t want excitement from your OS. You want it to run containers and stay out of the way

Talos is boring. Every node is identical. Upgrades are predictable. There’s nothing to debug at the OS level because there’s nothing custom at the OS level

Your Kubernetes cluster becomes the only thing that matters. The nodes underneath are interchangeable. Kill one, spin up another, identical state in minutes. That’s the cattle model actually implemented, not just talked about

The teams I’ve seen adopt Talos spend less time on node management and more time on actual application work. Not because Talos is magic, but because it removes the temptation to do things the quick way

You can’t SSH in to fix something. So you fix it properly. And properly fixed problems don’t come back


About the author

Sofiane Djerbi

Cloud & Kubernetes Architect, FinOps Expert. I help companies build scalable, secure, and cost-effective infrastructures.
