
How Docker Networking Actually Works — I Built It by Hand to Find Out

Mar 21, 2026

Most people who use Docker have no idea what happens under the hood when they type docker run. I didn’t either. I knew containers got their own IP, I knew they could talk to the internet, but I had no idea why or how. So I decided to build it from scratch — no Docker, just raw Linux commands — and now I’ll never look at a container the same way again.


The Problem Docker Is Solving

By default, every process on a Linux system shares the same network stack. Same interfaces, same routing table, same iptables rules. That works fine for a single server. But what if you want two containers on the same host that are completely isolated from each other? What if you want both of them listening on port 80 without conflicting?

That’s where network namespaces come in.

Linux can create multiple completely independent network stacks on a single host. Each namespace gets its own interfaces, its own routing table, its own iptables rules. A process inside a namespace has no idea other namespaces even exist.

This is why two Docker containers can both bind to port 80. They’re in separate namespaces — it’s like two separate machines. They don’t conflict for the same reason your laptop and your phone don’t conflict when they’re both running a web server.
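You can see this for yourself in about ten seconds. A quick sketch — it assumes python3 is available as a throwaway listener, and uses nsA/nsB as disposable namespace names:

```shell
# Two namespaces, two listeners, one port — no conflict (requires root).
sudo ip netns add nsA
sudo ip netns add nsB

# python3's built-in server is just a convenient stand-in for "a web server".
sudo ip netns exec nsA python3 -m http.server 80 &
sudo ip netns exec nsB python3 -m http.server 80 &
# Both bind port 80 cleanly — each namespace has its own port space.

# Tidy up:
sudo kill %1 %2
sudo ip netns del nsA
sudo ip netns del nsB
```

Try the same thing twice in the host namespace and the second bind fails with "Address already in use".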


The Three Building Blocks

Before we get to commands, you need to understand three things:

Network Namespace — an isolated network stack. Think of it as a container’s private universe for networking.

veth pair — a virtual ethernet cable with two ends. Whatever goes into one end comes out the other. You put one end inside the namespace and keep the other end on the host. Now they can talk.

Linux Bridge — a virtual switch. If you have multiple namespaces that all need to communicate, you plug one end of each veth pair into the bridge. The bridge becomes the middleman — like a switch in a real network rack.

For this walkthrough, I’m only building one namespace and connecting it to the internet. No bridge needed yet. The architecture looks like this:

[namespace: ns1] --veth pair-- [host] --NAT--> wlp1s0 --> 8.8.8.8
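For reference, since the bridge won't appear again in this walkthrough: wiring two namespaces through a bridge looks roughly like this. This is a sketch with assumed names (br0, nsA, nsB, ceth), and the IPs would be assigned the same way as in Step 4 below:

```shell
# Sketch only — multi-namespace wiring via a bridge (requires root).
sudo ip link add br0 type bridge
sudo ip link set br0 up

for ns in nsA nsB; do
  sudo ip netns add "$ns"
  sudo ip link add "veth-$ns" type veth peer name ceth
  sudo ip link set ceth netns "$ns"        # container end into the namespace
  sudo ip link set "veth-$ns" master br0   # host end plugged into the switch
  sudo ip link set "veth-$ns" up
done
# Assign a 10.0.0.x/24 address inside each namespace (as in Step 4)
# and they can ping each other through the bridge.
```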

Building It Step by Step

Step 1 — Create the namespace

sudo ip netns add ns1
ip netns list

You should see ns1 listed. That’s your isolated network stack, currently empty and disconnected from everything.


Step 2 — Create the veth pair

sudo ip link add veth0 type veth peer name veth1
ip link show | grep veth

You’ll see both veth0 and veth1 sitting in the host namespace, both DOWN. Think of it as a cable you just manufactured — both ends are still sitting on your desk, not plugged into anything yet.


Step 3 — Move one end into the namespace

sudo ip link set veth1 netns ns1
ip link show | grep veth

Now only veth0 appears on the host. veth1 has disappeared — it’s inside ns1 now. One end of the cable is plugged into the “container.”


Step 4 — Assign IPs and bring interfaces up

On the host side:

sudo ip addr add 10.0.0.1/24 dev veth0
sudo ip link set veth0 up

Inside the namespace:

sudo ip netns exec ns1 ip addr add 10.0.0.2/24 dev veth1
sudo ip netns exec ns1 ip link set veth1 up
sudo ip netns exec ns1 ip link set lo up

ip netns exec ns1 is how you run any command inside the namespace. Think of it as SSHing into the container.

Verify it worked:

sudo ip netns exec ns1 ip addr show

Step 5 — Test the veth cable

sudo ip netns exec ns1 ping -c 3 10.0.0.1

This should work immediately. The namespace can reach the host via the veth pair. Cable confirmed working.


Step 6 — Add a default route inside the namespace

Right now ns1 can reach 10.0.0.1 but nothing else. There’s no default route — packets destined for 8.8.8.8 have nowhere to go.

The host at 10.0.0.1 needs to act as the gateway for ns1. This is the same relationship your machine has with your home router — your machine doesn’t know how to reach the internet directly, it just sends everything to 192.168.0.1 and lets the router handle it.

sudo ip netns exec ns1 ip route add default via 10.0.0.1

Verify:

sudo ip netns exec ns1 ip route

You’ll see:

default via 10.0.0.1 dev veth1
10.0.0.0/24 dev veth1 proto kernel scope link src 10.0.0.2

ns1 now knows to send all external traffic to 10.0.0.1 — the host. Now the host needs to actually do something useful with those packets.


Step 7 — Enable IP forwarding on the host

By default, Linux drops packets that arrive on one interface and need to leave from another. That’s a safe default for a regular machine — it’s not a router, so why forward traffic?

But our host needs to forward packets from veth0 out through wlp1s0 to the internet. We need to tell the kernel: forwarding is allowed.

sudo sysctl -w net.ipv4.ip_forward=1

This is a kernel parameter — you’re changing kernel behavior at runtime. Without this, packets from ns1 arrive at the host and get silently dropped.
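Worth noting: sysctl -w only changes the running kernel, so the setting is gone after a reboot. To check the flag and make it permanent — the /etc/sysctl.d/ drop-in path is the usual convention on Ubuntu and most distros, and the filename here is my own choice:

```shell
# Check the current value — 1 means forwarding is on:
cat /proc/sys/net/ipv4/ip_forward

# Persist it across reboots (filename is arbitrary):
echo 'net.ipv4.ip_forward = 1' | sudo tee /etc/sysctl.d/99-ipforward.conf
sudo sysctl --system   # reload all sysctl config files
```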


Step 8 — Add the NAT/MASQUERADE rule

This is the step that everything depends on. Even with forwarding enabled, packets leaving with source IP 10.0.0.2 will fail. Why? Because 10.0.0.2 is a private IP that means nothing to the internet. When 8.8.8.8 tries to send a reply, it has no idea where 10.0.0.2 is. The reply never comes back.

We need to rewrite the source IP to the host’s real IP just before the packet exits the network interface. That’s what MASQUERADE does.

sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -o wlp1s0 -j MASQUERADE

Breaking this down:

  • -t nat — we’re working on the NAT table
  • -A POSTROUTING — intercept packets after routing, just before they leave
  • -s 10.0.0.0/24 — match packets from our namespace subnet
  • -o wlp1s0 — going out through this interface (your actual outgoing interface)
  • -j MASQUERADE — rewrite the source IP to whatever IP wlp1s0 currently has

The kernel tracks the connection so it knows how to translate the reply back and deliver it into ns1. That mechanism is called conntrack — connection tracking.

Note: -o wlp1s0 must match your actual outgoing interface. Use ip route | grep default to find yours. This is the part that breaks in most tutorials — they use eth0 as a generic example and your packets go out the wrong door.
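You can watch both the rule and the translation table in action. The conntrack CLI comes from the conntrack-tools package, which may or may not be installed on your machine; the /proc path works either way on most kernels:

```shell
# Confirm the MASQUERADE rule exists and is matching traffic —
# the pkts/bytes counters increment as packets from 10.0.0.0/24 hit it:
sudo iptables -t nat -L POSTROUTING -n -v

# Inspect the kernel's live translation table for our namespace's IP:
sudo conntrack -L | grep 10.0.0.2        # conntrack-tools, if installed
sudo cat /proc/net/nf_conntrack | grep 10.0.0.2   # raw view, same data
```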


Step 9 — The moment of truth

sudo ip netns exec ns1 ping -c 3 8.8.8.8

It works. A fully isolated namespace is talking to Google’s DNS server.


What Docker Actually Does

When you run docker run, Docker automates exactly these steps:

  1. Creates a network namespace for the container
  2. Creates a veth pair
  3. Moves one end into the namespace
  4. Attaches the other end to the docker0 bridge
  5. Assigns an IP inside the container
  6. Sets up the MASQUERADE rule via iptables

Run ip link show on any machine running Docker and you’ll see docker0 (the bridge) and several vethXXXX interfaces (one for each running container). You’re looking at the raw primitives now. Docker didn’t invent any of this — it’s all Linux kernel networking, automated.


The Packet Journey — Full Picture

Here’s what happens when ns1 pings 8.8.8.8:

ns1 generates packet (src: 10.0.0.2, dst: 8.8.8.8)
        ↓
kernel checks: "is 8.8.8.8 mine?" → No
        ↓
kernel checks: "is ip_forward=1?" → Yes
        ↓
kernel checks routing table: "where does 8.8.8.8 go?" → wlp1s0
        ↓
iptables POSTROUTING: rewrite src from 10.0.0.2 → 192.168.0.147
        ↓
packet exits wlp1s0 to the internet
        ↓
8.8.8.8 replies to 192.168.0.147
        ↓
kernel conntrack translates reply back to 10.0.0.2
        ↓
reply delivered into ns1

Every step has a reason. Remove any one of them — it breaks at exactly that point.
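If tcpdump is installed, you can watch the rewrite happen live. The same ping shows different source IPs on the two sides of the NAT — 10.0.0.2 before POSTROUTING, the host's address after:

```shell
# Terminal 1 — the pre-NAT side: source is still 10.0.0.2
sudo tcpdump -ni veth0 icmp

# Terminal 2 — the post-NAT side: source is now the host's IP
sudo tcpdump -ni wlp1s0 icmp

# Terminal 3 — generate the traffic:
sudo ip netns exec ns1 ping -c 3 8.8.8.8
```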


Why This Matters

Understanding this puts you in a completely different category when troubleshooting container networking issues. When something breaks, you know exactly which layer to check:

  • Container can’t reach host? → veth pair, IP assignment
  • Container can reach host but not internet? → default route, ip_forward
  • Packets leaving but no reply? → MASQUERADE rule, wrong interface

Most engineers who only know docker run are stuck when this breaks. You’re not.
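For completeness, here's the teardown. The -D form deletes exactly the rule added in Step 8, and deleting the namespace destroys veth1 — which takes veth0 down with it, because destroying one end of a veth pair destroys the other:

```shell
# Undo everything from this walkthrough:
sudo iptables -t nat -D POSTROUTING -s 10.0.0.0/24 -o wlp1s0 -j MASQUERADE
sudo ip netns del ns1                   # also destroys veth1 and its peer veth0
sudo sysctl -w net.ipv4.ip_forward=0    # only if nothing else needs forwarding
```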


This post is part of my Linux Mastery Road to Cloud series — documenting real hands-on learning on a live Ubuntu server as I work toward a career in cloud and DevOps engineering.
