Docker finally explained without the BS

Jun 15, 2022 · 4 min read

Containers aren't VMs. Here's what they actually are, with analogies that make sense.

Docker, Containers, DevOps

“It works on my machine”

If you’ve never heard this phrase, you either work alone or you’re lying. The app runs perfectly on your laptop, you push it, and it explodes in production. Different Python version. Missing library. Wrong config path. Classic.

Docker fixes this by packaging your app with everything it needs to run. Same environment everywhere: your laptop, your colleague’s laptop, staging, production. Identical.

The shipping container analogy

Before standardized shipping containers, moving goods was chaos. Every ship had different cargo holds. Every port had different equipment. Loading a ship took weeks because workers had to figure out how to fit oddly shaped cargo together.

Then someone invented the standard shipping container. Same size everywhere. Ships designed for them. Ports equipped to handle them. Now you can move goods from Shanghai to Rotterdam without ever opening the box.

Docker containers are the same idea for software. Package your app once, run it anywhere that has Docker. The host doesn’t care what’s inside. Python app? Java app? Weird legacy thing from 2008? Doesn’t matter. It’s a container, it runs.

[Image: a container ship loaded with standardized containers]

Containers aren’t VMs

This trips people up. Virtual machines emulate entire computers. They have their own kernel, their own OS, their own everything. Heavy, slow to start, resource-hungry.

Containers share the host’s kernel. They’re just isolated processes with their own filesystem. Light, fast to start, efficient.

Think of it this way: a VM is like renting an entire apartment. You get your own kitchen, bathroom, walls. A container is like having your own room in a shared house. You have privacy and your own stuff, but you share the plumbing and electricity.

That’s why you can run 50 containers on a laptop but struggle with 5 VMs.
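You can see the shared kernel directly. On a Linux host, a container reports the same kernel version as the host, because there is no guest OS underneath it (a quick check, assuming Docker is installed; on macOS or Windows, Docker runs inside a Linux VM, so the host value differs):

```
# Kernel version on the host
uname -r

# Kernel version inside an Alpine container: on a Linux host, the same value,
# because the container shares the host kernel rather than booting its own
docker run --rm alpine uname -r
```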

What’s actually in a container

A container image has:

  • Your application code
  • Runtime (Python, Node, Java, whatever)
  • System libraries your app needs
  • Config files
  • Environment variables

Everything is packaged in layers. The base layer might be a minimal Ubuntu. The next layer adds Python. The next adds your dependencies. The top layer has your code.

When you update your code, Docker only rebuilds the top layer. The rest comes from cache. Fast builds, small downloads.
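You can watch this happen in the build output. Rebuilding after a code-only change, the earlier layers come back from cache (an illustrative session; the exact output format depends on your Docker/BuildKit version):

```
docker build -t myapp .
# On a rebuild after editing only your app code, output looks roughly like:
#  => CACHED [2/5] WORKDIR /app
#  => CACHED [3/5] COPY requirements.txt .
#  => CACHED [4/5] RUN pip install -r requirements.txt
#  => [5/5] COPY app.py .
```

This is why Dockerfiles copy dependency manifests before application code: the slow dependency-install layer stays cached as long as the manifest is unchanged.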

The basics

Pull an image:

docker pull python:3.11

Run a container:

docker run -it python:3.11 python

You’re now in a Python shell inside a container. Exit, and it’s gone. Clean.

Run something useful:

docker run -d -p 8080:80 nginx

That’s Nginx running in the background, with port 80 inside the container mapped to port 8080 on your machine. Visit localhost:8080 and you’ll see the Nginx welcome page.

List running containers:

docker ps

Stop one:

docker stop <container_id>

Building your own image

Here’s a simple Python app. Create a file called app.py:

from flask import Flask
app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Docker!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)
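Before containerizing, it’s worth a quick sanity check that the app itself works. Flask ships a built-in test client, so you can exercise the route without starting a server (a quick sketch, assuming Flask is installed locally):

```python
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Docker!'

# Flask's test client sends a request in-process, without binding a port
response = app.test_client().get('/')
assert response.status_code == 200
print(response.data.decode())  # Hello from Docker!
```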

Now create a Dockerfile:

FROM python:3.11-slim

WORKDIR /app

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY app.py .

EXPOSE 5000
CMD ["python", "app.py"]

And a requirements.txt:

flask

Build it:

docker build -t myapp .

Run it:

docker run -d -p 5000:5000 myapp

Visit localhost:5000. Your app is running in a container. Push this image to a registry, and anyone can run the exact same thing.
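Pushing works through docker tag and docker push. A sketch, assuming Docker Hub; your-username is a placeholder for your own account:

```
# Give the local image a registry-qualified name
docker tag myapp your-username/myapp:1.0

# Authenticate and upload
docker login
docker push your-username/myapp:1.0

# Anyone can now pull and run the exact same image
docker run -d -p 5000:5000 your-username/myapp:1.0
```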

The “but I can just use virtualenv” argument

Sure, virtualenv isolates Python packages. But it doesn’t isolate:

  • System libraries
  • The Python version itself
  • OS-level dependencies
  • File paths
  • Environment variables

Your app needs ImageMagick? Better hope the production server has the same version. It needs a specific OpenSSL? Good luck.

Containers include all of this. The entire environment travels with your app.
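For the ImageMagick example above, the fix is to make the system dependency part of the image itself, installed at build time (a hedged sketch; apt-get and the package name apply to Debian-based images like python:3.11-slim):

```dockerfile
FROM python:3.11-slim

# System-level dependency baked into the image, so every environment
# gets the same version; cleaning the apt cache keeps the layer small
RUN apt-get update \
    && apt-get install -y --no-install-recommends imagemagick \
    && rm -rf /var/lib/apt/lists/*
```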

When Docker makes sense

Good use cases:

  • Consistent dev environments across a team
  • CI/CD pipelines
  • Deploying to any cloud provider
  • Running multiple services locally
  • Isolating dependencies between projects

Overkill:

  • Simple scripts you run once
  • Local tools that work fine natively
  • When you’re the only one who’ll ever run it

Don’t containerize everything just because you can. But for anything that needs to run somewhere else, containers save you from “works on my machine” forever.
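For the multiple-services use case above, Docker Compose is the usual tool: one file describes the services, and one command starts them all. A minimal sketch (the db service and its image are illustrative):

```yaml
# docker-compose.yml — start everything with: docker compose up
services:
  web:
    build: .              # builds the Flask image from this post's Dockerfile
    ports:
      - "5000:5000"
  db:
    image: postgres:15    # illustrative second service; any image works
    environment:
      POSTGRES_PASSWORD: example
```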

One last analogy

Think of a recipe vs. a meal.

A Dockerfile is a recipe. It lists ingredients (base image), preparation steps (RUN commands), and how to serve (CMD).

An image is a frozen meal. Made once, stored, can be reheated anywhere.

A container is the meal being eaten. Active, doing its job, gone when finished.

You write the recipe once. Docker builds the frozen meal. Run it whenever you need it, wherever you need it.

That’s it. Containers aren’t magic. They’re just a really good way to package software so it runs the same everywhere. Once you get that, the rest is just commands and config.


About the author

Sofiane Djerbi


Cloud & Kubernetes Architect, FinOps Expert. I help companies build scalable, secure, and cost-effective infrastructures.
