Google kills Container Registry and nobody is surprised

Mar 18, 2025 · 6 min read

GCR shuts down March 18, 2025. Google forces everyone to Artifact Registry. Another product, another forced migration to the graveyard.

Docker · GCP · DevOps · Cloud

Google Container Registry stops accepting pushes today. March 18, 2025. If your CI/CD is still pushing to gcr.io, it just broke.

This isn’t a surprise. Google announced it in May 2023. Almost two years of warning. “Migrate to Artifact Registry,” they said. “Better features,” they promised. “More unified experience.”

Translation: we built a new thing, now move your stuff or it disappears.

Why GCR is dying

Google’s official reason? Artifact Registry is “more modern” and “supports more artifact types.” It handles Docker images, Helm charts, npm packages, Maven artifacts, all in one place.

The real reason? GCR worked fine, but it wasn’t part of their grand unified vision. So they built Artifact Registry, waited until enough people migrated, and now they’re killing the old thing.

This is the Google playbook:

  1. Build something new
  2. Get people dependent on it
  3. Build something newer
  4. Force migration
  5. Repeat every 3-5 years

Remember that Container Registry was itself a replacement for earlier Docker hosting solutions. Now it’s being replaced. What replaces Artifact Registry in 2028? Place your bets.

What happens now

Today, March 18, 2025: No more pushes to gcr.io. Your CI/CD pipeline pushing images? Dead. Your automated builds? Broken.

June 3, 2025: Pulls stop working too. Can’t pull images anymore. Nothing.

October 14, 2025: Final deadline. Any images you didn’t migrate? Gone forever.

Google gave you almost two years' notice. They also gave you three different deadlines to keep track of, a migration tool that requires yet another gcloud component, and zero compensation for your engineering time.

The forced march to Artifact Registry

Here’s what you need to do if you haven’t already:

1. Create Artifact Registry repositories

gcloud artifacts repositories create my-docker-repo \
  --repository-format=docker \
  --location=us-central1 \
  --description="Forced migration from GCR"

The URL format changes. It’s not gcr.io/project-id/image anymore:

# Old GCR format
gcr.io/my-project/my-app:v1.2.3

# New Artifact Registry format
us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1.2.3

More characters to type. Region required in the URL. More fun for everyone.
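
In practice, moving a single image to the new path is just a retag and a push. A quick sketch, assuming the repository from step 1 exists and using placeholder project and image names:

# Pull the old image, retag it for Artifact Registry, push it
docker pull gcr.io/my-project/my-app:v1.2.3
docker tag gcr.io/my-project/my-app:v1.2.3 \
  us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1.2.3
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1.2.3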

2. Copy your images

Google provides gcrane for bulk copying:

gcrane cp -r gcr.io/your-project \
  us-central1-docker.pkg.dev/your-project/my-repo

Works fine for 10 images. Got 500 images across multiple projects and regions? Block out your week.
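
If you're in the multi-project boat, wrapping gcrane in a loop is the obvious move. A rough sketch, with placeholder project names and assuming you're authenticated against both registries:

# Copy everything from each project's gcr.io into its Artifact Registry repo
for project in project-a project-b project-c; do
  gcrane cp -r "gcr.io/${project}" \
    "us-central1-docker.pkg.dev/${project}/my-repo"
done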

3. Update every single reference

Your Kubernetes manifests:

# Find and replace everywhere
image: gcr.io/my-project/my-app:v1.2.3
# Becomes
image: us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1.2.3

Your CI/CD pipelines. Your Dockerfiles. Your Helm charts. Your deployment scripts. Your runbooks. Your documentation. Every hardcoded gcr.io reference across your entire stack.

Miss one? That’s a production incident at 3 AM.
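
Grep is your friend here. A starting point for the sweep; the file globs and the in-place sed rewrite are assumptions about your repo layout, so review the hits before letting it loose:

# Find every hardcoded gcr.io reference
grep -rn 'gcr.io/' --include='*.yaml' --include='*.yml' --include='Dockerfile*' .

# Rewrite them once you've checked the list (GNU sed)
grep -rl 'gcr.io/my-project/' . | xargs sed -i \
  's|gcr.io/my-project/|us-central1-docker.pkg.dev/my-project/my-repo/|g'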

4. Update authentication everywhere

gcloud auth configure-docker us-central1-docker.pkg.dev

Every developer machine. Every CI runner. Every deployment agent. Every place that pulls images.

Someone will forget. Something will break.
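
A quick way to confirm a machine actually picked up the change; this assumes gcloud wrote the standard Docker config file:

# configure-docker registers a credential helper for the new host
cat ~/.docker/config.json
# Expect an entry like: "credHelpers": { "us-central1-docker.pkg.dev": "gcloud" }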

5. Test everything

Because you absolutely will miss something.
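
At minimum, smoke-test a pull from a clean machine and from the cluster itself before you cut over. A sketch with placeholder names:

# Can you pull the migrated image at all?
docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1.2.3

# Can the cluster? Run a throwaway pod and watch for ImagePullBackOff
kubectl run gcr-migration-test --restart=Never \
  --image=us-central1-docker.pkg.dev/my-project/my-repo/my-app:v1.2.3
kubectl get pod gcr-migration-test
kubectl delete pod gcr-migration-test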

The cost surprise nobody talks about

Artifact Registry is more expensive. Significantly.

Container Registry: $0.026 per GB per month (Google Cloud Storage multi-regional pricing)

Artifact Registry: $0.10 per GB per month

That’s almost 4x more expensive for storage. Same images, same data, quadruple the cost.

“But Artifact Registry has regional repositories!” Sure. If you co-locate with your compute, you save on egress. That might offset the storage cost increase. Or it might not.

Either way, your bill is going up. Not because you’re storing more. Not because you’re using more features. Because Google said so.

Calculate your current GCR storage. Multiply by 4. That’s your new baseline. Budget accordingly.
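
GCR stores its layers in plain Cloud Storage buckets, so you can size the damage directly. A sketch assuming the default gcr.io bucket name for your project (regional hostnames use us./eu./asia. prefixed buckets):

# Total size of images behind gcr.io/my-project
gsutil du -sh gs://artifacts.my-project.appspot.com

# New monthly baseline, roughly: size in GB x $0.10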

This is the Google Cloud problem

Google Cloud has excellent technology. The products work well. The infrastructure is solid. Then they kill it.

The Google graveyard is littered with products people built businesses on:

  • Google Reader
  • Google Code
  • Inbox
  • Cloud IoT Core
  • Cloud Functions Gen 1
  • Container Registry

Every 2-3 years, something you built your infrastructure around gets deprecated. You get 12-24 months notice. You scramble to migrate. You rewrite everything. You update every reference. You retrain your team.

Then in 3 years, they deprecate the replacement.

This isn’t innovation. It’s churn for the sake of churn.

The AWS comparison nobody wants to hear

AWS still supports EC2 APIs from 2006. Nineteen years old. Still working. Still supported.

They don’t force you to migrate from EC2 Classic to VPC to EC2-NextGen every few years. They add new features, deprecate truly obsolete things, but they don’t make you rewrite your infrastructure every three years because a product manager decided it didn’t fit the new roadmap.

Azure is similar. Once something ships, it stays.

Google treats infrastructure like consumer products. Ship fast, iterate, kill when something shinier comes along. That works for apps. It catastrophically fails for production infrastructure that needs to run for a decade.

Your database shouldn’t require a major migration every three years. Your container registry shouldn’t either.

What to do

If you’re on GCP: Migrate to Artifact Registry immediately. You have until June 3 before pulls stop working. Budget extra engineering time. Budget higher costs. Update your disaster recovery plans to account for Google deprecating Artifact Registry in 2028.

If you’re choosing a cloud: Remember this moment. When you’re evaluating GCP vs AWS vs Azure, factor in the migration tax. Every 3 years, budget 2-3 months of engineering time to migrate to whatever Google decides is the new hotness.

If you’re multi-cloud: Keep your container registry somewhere stable. AWS ECR has been around since 2015 with zero forced migrations. Docker Hub isn’t going anywhere. Harbor is self-hosted. Pick something that won’t disappear because a PM changed the roadmap.

The actual lesson

Your infrastructure shouldn’t require major rewrites every 3 years because a vendor changed strategy.

Container Registry worked. It was simple, fast, and cheap. It did exactly one thing: store container images. It did it well.

But it wasn’t “unified” enough, wasn’t “modern” enough, didn’t fit the new Artifact vision. So now it’s dead.

Mark your calendars for 2028. That’s when they’ll announce Artifact Registry 2.0 and deprecate the current version. You read it here first.

Migrate today. You have until June 3 before your deployments break. October 14 before your images vanish.

Container Registry, May 2015 - March 2025. Another victim of Google’s inability to maintain products long-term. Rest in peace alongside the other 200+ services in the graveyard.

At least they gave you 77 days between “pushes stop” and “pulls stop” as a grace period. How generous.

Update: Talos Linux kept using GCR 8 months after shutdown

November 6, 2025: Talos v1.11.5 released. Still uses gcr.io/etcd-development/etcd:v3.6.5.

That's right. Eight months after GCR stopped accepting pushes, and five months after pulls went dark, Talos shipped a version that depends on it.

The timeline is absurd:

  • March 18, 2025: GCR stops accepting pushes
  • June 3, 2025: GCR stops accepting pulls
  • November 6, 2025: Talos v1.11.5 released, tries to pull from gcr.io

How do you deploy a Kubernetes distro that pulls from a registry that’s been dead for 5 months?

The fix: v1.12.0-beta.0 finally switches to registry.k8s.io/etcd

If you’re stuck on v1.11.5, patch it yourself:

cluster:
  etcd:
    image: registry.k8s.io/etcd:v3.5.16  # or quay.io/coreos/etcd:v3.5.16
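
To apply that on a running node, talosctl can patch the machine config in place. Treat the exact flags as an assumption and check them against your talosctl version:

# Save the snippet above as etcd-image.yaml, then patch the node's config
talosctl patch machineconfig --nodes 10.0.0.2 --patch @etcd-image.yaml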

This is what happens when project dependencies aren't updated. Google announced the shutdown in May 2023. Talos had two years to fix this. They shipped a broken release 8 months after the deadline.

Upgrade to v1.12.0-beta.0 or patch your config. Don't wait for a stable release that might still reference dead registries.

Enjoyed this article?

Let me know! A share is always appreciated.

About the author

Sofiane Djerbi

Cloud & Kubernetes Architect, FinOps Expert. I help companies build scalable, secure, and cost-effective infrastructures.
