Stop scripting Helm deploys, use Helmfile
Managing multiple Helm releases is messy. Helmfile fixes it with declarative configuration and real environments.
Helm alone doesn’t scale past three releases. You end up with bash scripts calling helm upgrade, values files everywhere, and no clear picture of what’s running where.
Helmfile solves this. It’s a declarative layer over Helm that manages multiple releases across environments. Think of it as Terraform for Helm charts.
The Problem with Pure Helm
Managing production looks like this:
helm upgrade --install prometheus prometheus-community/prometheus \
  -f values/prod/prometheus.yaml \
  -f values/prod/secrets.yaml \
  --set server.retention=30d

helm upgrade --install grafana grafana/grafana \
  -f values/prod/grafana.yaml \
  --set adminPassword=$GRAFANA_PASSWORD

# ... 15 more releases

You script this. Then you need staging. Then dev. Soon you’re maintaining shell scripts that diverge between environments and break during rollbacks.
What Helmfile Actually Does
Single file defines everything:
repositories:
  - name: prometheus-community
    url: https://prometheus-community.github.io/helm-charts

releases:
  - name: prometheus
    namespace: monitoring
    chart: prometheus-community/prometheus
    values:
      - values/prometheus.yaml
      - values/{{ .Environment.Name }}/prometheus.yaml
    set:
      - name: server.retention
        value: 30d

  - name: grafana
    namespace: monitoring
    chart: grafana/grafana
    values:
      - values/grafana.yaml
    secrets:
      - values/{{ .Environment.Name }}/secrets.yaml

Run helmfile sync and it deploys everything. Idempotent, predictable, version-controlled.
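A minimal setup sketch, assuming Homebrew; the helm-diff plugin is what helmfile diff and apply rely on:

# Install helmfile and the helm-diff plugin
brew install helmfile
helm plugin install https://github.com/databus23/helm-diff

# Deploy everything defined in helmfile.yaml
helmfile sync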
Environments That Work
Define once, deploy everywhere:
environments:
  dev:
    values:
      - environments/dev/values.yaml
  staging:
    values:
      - environments/staging/values.yaml
  prod:
    values:
      - environments/prod/values.yaml

releases:
  - name: api
    chart: ./charts/api
    values:
      - values/api.yaml
      - values/{{ .Environment.Name }}/api.yaml

Deploy to staging:

helmfile -e staging sync

Same command works for dev and prod. No conditional logic in scripts, no copying files around.
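To sanity-check what an environment actually resolves to before touching the cluster, helmfile can render everything without deploying; a quick sketch:

# Print the fully resolved state file for one environment
helmfile -e staging build

# Render the final Kubernetes manifests without installing anything
helmfile -e staging template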
Secrets Without Pain
Helmfile integrates with helm-secrets using SOPS:
releases:
  - name: app
    chart: ./charts/app
    secrets:
      - secrets://environments/{{ .Environment.Name }}/secrets.yaml

Encrypt secrets with SOPS, commit them. Helmfile decrypts on deploy. No plaintext passwords in git, no manual key juggling.
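This assumes the helm-secrets plugin is installed; it’s a one-time step:

# helm-secrets is what lets Helm (and therefore Helmfile) decrypt SOPS files
helm plugin install https://github.com/jkroepke/helm-secrets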
Setup takes five minutes:
# Install SOPS and age
brew install sops age

# Generate key
age-keygen -o key.txt

# Configure SOPS
cat > .sops.yaml <<EOF
creation_rules:
  - path_regex: secrets/.*\.yaml$
    age: age1ql3z7hjy54pw3hyww5ayyfg7zqgvc7w3j2elw8zmrj2kg5sfn9aqmcac8p
EOF

# Encrypt file
sops -e secrets/prod/secrets.yaml > secrets/prod/secrets.enc.yaml
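At deploy time the age private key has to be reachable for decryption; pointing SOPS at it via the SOPS_AGE_KEY_FILE environment variable is one way (the path here matches the key generated above):

# Let SOPS (and therefore helm-secrets) find the private key during sync
export SOPS_AGE_KEY_FILE=$PWD/key.txt
helmfile -e prod sync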
Dependency Management
Some releases need others first:
releases:
  - name: cert-manager
    namespace: cert-manager
    chart: jetstack/cert-manager

  - name: ingress-nginx
    namespace: ingress
    chart: ingress-nginx/ingress-nginx
    needs:
      - cert-manager

  - name: api
    namespace: default
    chart: ./charts/api
    needs:
      - ingress-nginx

Helmfile deploys in order: cert-manager first, then ingress, then apps. It handles failures correctly and stops if cert-manager fails.
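If you want releases applied strictly one at a time on top of the needs ordering, concurrency can be capped; a small sketch:

# Limit Helmfile to one helm operation at a time
helmfile sync --concurrency 1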
Diff Before Deploy
Never deploy blind:
helmfile diff

Shows exactly what changes. Same workflow as Terraform. I caught a production typo this way that would have taken down authentication.
# Review changes
helmfile diff

# Apply if good
helmfile apply

The apply command shows the diff, then asks for confirmation. Use sync for CI/CD pipelines.
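In a pipeline that means two non-interactive steps; a minimal sketch, assuming helmfile and helm-diff are already on the runner and DEPLOY_ENV is set by the pipeline:

#!/usr/bin/env bash
set -euo pipefail

# Log exactly what will change, then apply without a confirmation prompt
helmfile -e "$DEPLOY_ENV" diff
helmfile -e "$DEPLOY_ENV" sync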
Templating When You Need It
Go templates work across the whole file:
{{ $domain := .Values.domain }}

releases:
  - name: api
    chart: ./charts/api
    set:
      - name: ingress.host
        value: api.{{ $domain }}

  - name: web
    chart: ./charts/web
    set:
      - name: ingress.host
        value: {{ $domain }}

Values come from environment files:
# environments/prod/values.yaml
domain: example.com
replicas: 5

# environments/dev/values.yaml
domain: dev.example.com
replicas: 1
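The same templating can toggle whole releases per environment through the installed field; a sketch, where debug-toolbox is a hypothetical release you only want outside prod:

releases:
  - name: debug-toolbox              # hypothetical release
    chart: ./charts/debug-toolbox
    installed: {{ ne .Environment.Name "prod" }}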
Real-World Structure
This is what works after managing 50+ releases:
helmfile.yaml          # Main file
environments/
  dev/
    values.yaml
    secrets.yaml
  staging/
    values.yaml
    secrets.yaml
  prod/
    values.yaml
    secrets.yaml
values/
  prometheus.yaml      # Shared config
  grafana.yaml
  api.yaml
charts/                # Local charts
  api/
  worker/

Keep shared config in values/ and environment-specific settings in environments/. Local charts go in charts/; external ones are referenced from repos.
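Once the release list gets long, the main file can also delegate to smaller state files via the helmfiles key; a sketch, with illustrative file names:

# helmfile.yaml delegating to per-stack state files
helmfiles:
  - path: helmfile.d/monitoring.yaml
  - path: helmfile.d/apps.yaml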
Commands You’ll Use
# Deploy everything
helmfile sync

# Deploy one release
helmfile -l name=prometheus sync

# Show what would change
helmfile diff

# Interactive apply
helmfile apply

# Update chart dependencies
helmfile deps

# Destroy everything
helmfile destroy

The label selector (-l) is crucial. Deploy just the monitoring stack:

helmfile -l tier=monitoring sync

Add labels in helmfile.yaml:
releases:
  - name: prometheus
    labels:
      tier: monitoring
    chart: prometheus-community/prometheus

Migration Strategy
Don’t rewrite everything. Move one release at a time:
- Pick a simple release (monitoring, not a core app)
- Add it to helmfile.yaml
- Run helmfile diff; it should show no changes
- Delete the old helm command
- Repeat

I migrated 30 releases over two weeks. Each took 10 minutes. Zero downtime.
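As a sketch of steps 2 and 3 (the release name and chart here are hypothetical), an existing helm upgrade --install loki grafana/loki -f values/loki.yaml becomes:

releases:
  - name: loki                # must match the existing Helm release name
    namespace: monitoring     # and its namespace
    chart: grafana/loki
    values:
      - values/loki.yaml

helmfile -l name=loki diff   # should report no changes before you retire the old command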
When Not to Use Helmfile
Skip it if:
- You’re running fewer than three releases
- You only have one environment
- Your Helm charts are simple and never change

Also skip it if you’re already on ArgoCD or Flux. Those handle the same problems differently. Helmfile is for when you want explicit, command-line deploys, not GitOps.
Common Issues
Values precedence: Later files override earlier ones. Put environment-specific values last:
values:
  - values/base.yaml                             # Overridden by
  - values/{{ .Environment.Name }}/values.yaml

State management: Helmfile tracks releases in Helm’s state. If you manually helm delete something, Helmfile doesn’t know about it. Use helmfile destroy instead.
Slow diffs: With 20+ releases, helmfile diff takes time. Use labels to target subsets:
helmfile -l app=api diff

The Result
I used to maintain a 200-line bash script that broke on errors, diverged across environments, and made deploys scary. Now I run one command that handles everything. Diff shows changes before they happen. Secrets stay encrypted in git. New developers get up to speed in an hour.
Less time managing deployments means more time building features.
Reality is often more nuanced. But me? Nuance bores me. I'd rather be clear.