Once you install Docker on Linux, a default networking configuration is applied. Here is what happens…
Docker adds a bridge named docker0 to the Linux OS, and that bridge is an isolated network defined in software.
slade@linux-home:/etc/iptables$ ifconfig docker0
docker0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
inet6 fe80::42:e5ff:fef0:dbc0 prefixlen 64 scopeid 0x20<link>
ether 02:42:e5:f0:db:c0 txqueuelen 0 (Ethernet)
RX packets 17842 bytes 2257624 (2.2 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 24480 bytes 174222061 (174.2 MB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Your Linux host running the Docker Engine can see this interface, and machines attached to your LAN can reach the ports you've exposed on your containers via iptables NAT functionality.
Docker also makes the required iptables changes so that traffic arriving at the host's primary interface on any port published by a Docker container is forwarded to that container via the bridge it's attached to. Likewise, any outbound traffic leaving a container is forwarded via the bridge out through the host's primary network interface.
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
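If you want to see these rules on your own host, you can dump the relevant chains directly (assuming the iptables command is available; on distributions that use nftables, the iptables-nft compatibility layer presents the same interface):

```shell
# Dump the FORWARD chain and Docker's custom filter chain in rule-spec form
sudo iptables -S FORWARD
sudo iptables -S DOCKER

# The DNAT rules for published ports live in the nat table
sudo iptables -t nat -S DOCKER
```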
Here are a few example rules that were created in response to Docker containers spinning up:
-A DOCKER -d 172.17.0.2/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 5432 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8444 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8443 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8001 -j ACCEPT
-A DOCKER -d 172.17.0.3/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8000 -j ACCEPT
-A DOCKER -d 172.17.0.4/32 ! -i docker0 -o docker0 -p tcp -m tcp --dport 8080 -j ACCEPT
Note that everything revolves around the docker0 bridge.
This all works, but the default bridge has limitations worth knowing about: containers on it can only reach each other by IP address (or via the legacy --link flag), whereas user-defined bridges give you automatic DNS-based name resolution between containers and better isolation. It's recommended to create your own user-defined bridge to avoid these limitations.
Configure your networking manually…
Add a Docker bridge
docker network create -d bridge --subnet 172.172.0.1/16 docker1
(One caution: 172.172.0.0/16 is not within the RFC 1918 private ranges; 172.16.0.0/12 only covers 172.16.x.x through 172.31.x.x. For private addressing you would normally pick a subnet inside 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16.)
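A few related flags are worth knowing; all of these are standard docker network create options, and the subnet and names below are purely illustrative:

```shell
# Create a bridge network with an explicit gateway and a custom
# Linux bridge name instead of the generated br-<id> form
docker network create -d bridge \
  --subnet 172.18.0.0/16 \
  --gateway 172.18.0.1 \
  -o com.docker.network.bridge.name=docker2 \
  docker2
```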
Check your Docker networking
slade@linux-home:~$ docker network ls
NETWORK ID NAME DRIVER SCOPE
69ce3fd4683d bridge bridge local
d1c6e30494d3 docker1 bridge local
7f7e80588cd4 host host local
5a9009966282 none null local
Show your Linux bridges
slade@linux-home:~$ brctl show
bridge name bridge id STP enabled interfaces
br-d1c6e30494d3 8000.02429b725669 no
docker0 8000.0242edf05746 no veth76b8a00
Note that the Docker output shows a bridge with a network ID ending in 94d3, and that same ID appears in the output of brctl show.
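That naming is deterministic: by default, Docker names the Linux bridge "br-" followed by the first 12 characters of the network ID (overridable with the com.docker.network.bridge.name driver option). A quick sketch to derive the bridge name yourself:

```shell
# Fetch the full network ID and build the expected bridge name.
# ${NET_ID:0:12} takes the first 12 characters (bash substring syntax).
NET_ID=$(docker network inspect -f '{{.Id}}' docker1)
echo "br-${NET_ID:0:12}"
```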
Now check your Linux interfaces again.
slade@linux-home:~$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 60:a4:4c:53:3e:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.10/24 brd 10.0.0.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::62a4:4cff:fe53:3e18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ed:f0:57:46 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:edff:fef0:5746/64 scope link
valid_lft forever preferred_lft forever
17: br-d1c6e30494d3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:9b:72:56:69 brd ff:ff:ff:ff:ff:ff
inet 172.172.0.1/16 brd 172.172.255.255 scope global br-d1c6e30494d3
valid_lft forever preferred_lft forever
You can see that the bridge br-d1c6e30494d3 has the IP we assigned when we created the Docker network with the Docker CLI; however, it shows a state of DOWN (a Linux bridge stays DOWN until an interface with a carrier is attached to it).
Now you can run containers and attach them to this network. Start a container on the new docker1 bridge:
docker run -d --name kong-database --network docker1 \
-p 5432:5432 \
-e "POSTGRES_USER=kong" \
-e "POSTGRES_PASSWORD=secretface" \
-e "POSTGRES_DB=kong" \
postgres:9.6
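To confirm the container actually landed on the docker1 network and see the address it was given, docker inspect with a Go-template filter works well (the first template path assumes the network name docker1):

```shell
# Show the container's IP on the docker1 network
docker inspect -f '{{ .NetworkSettings.Networks.docker1.IPAddress }}' kong-database

# Or list every network the container is attached to
docker inspect -f '{{ range $name, $net := .NetworkSettings.Networks }}{{ $name }}: {{ $net.IPAddress }}{{ println }}{{ end }}' kong-database
```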
Check your Linux interfaces again
slade@linux-home:~/kong$ ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 60:a4:4c:53:3e:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.10/24 brd 10.0.0.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::62a4:4cff:fe53:3e18/64 scope link
valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:ed:f0:57:46 brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
valid_lft forever preferred_lft forever
inet6 fe80::42:edff:fef0:5746/64 scope link
valid_lft forever preferred_lft forever
17: br-d1c6e30494d3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:9b:72:56:69 brd ff:ff:ff:ff:ff:ff
inet 172.172.0.1/16 brd 172.172.255.255 scope global br-d1c6e30494d3
valid_lft forever preferred_lft forever
inet6 fe80::42:9bff:fe72:5669/64 scope link
valid_lft forever preferred_lft forever
The bridge br-d1c6e30494d3 is now UP.
Continue by running a few more containers attached to the new bridge. (On a user-defined bridge, containers can already resolve each other by name through Docker's embedded DNS, so the --link flags below are legacy options and strictly redundant here.)
docker run --rm --network docker1 \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_USER=kong" \
-e "KONG_PG_PASSWORD=secretface" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
kong:0.15.0 kong migrations bootstrap
docker run -d --name kong --network docker1 \
--link kong-database:kong-database \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_USER=kong" \
-e "KONG_PG_PASSWORD=secretface" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001, 0.0.0.0:8444 ssl" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong:0.15.0
You should now see these containers running:
slade@linux-home:~/kong$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
a11bacb99169 pgbi/kong-dashboard "./docker/entrypoint…" 18 minutes ago Up 18 minutes 0.0.0.0:8080->8080/tcp hopeful_bose
068d82543001 kong:0.15.0 "/docker-entrypoint.…" 19 minutes ago Up 19 minutes 0.0.0.0:8000-8001->8000-8001/tcp, 0.0.0.0:8443-8444->8443-8444/tcp kong
46f19f0da69b postgres:9.6 "docker-entrypoint.s…" 26 minutes ago Up 26 minutes 0.0.0.0:5432->5432/tcp kong-database
Notice that iptables has been updated automatically. Just as with the rules created for the default docker0 bridge, we now have the same rules for our br-d1c6e30494d3 bridge.
-A FORWARD -o br-d1c6e30494d3 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o br-d1c6e30494d3 -j DOCKER
-A FORWARD -i br-d1c6e30494d3 ! -o br-d1c6e30494d3 -j ACCEPT
-A FORWARD -i br-d1c6e30494d3 -o br-d1c6e30494d3 -j ACCEPT
-A FORWARD -o docker0 -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A FORWARD -o docker0 -j DOCKER
-A FORWARD -i docker0 ! -o docker0 -j ACCEPT
-A FORWARD -i docker0 -o docker0 -j ACCEPT
And these port-specific rules in the DOCKER chain:
-A DOCKER -d 172.172.0.2/32 ! -i br-d1c6e30494d3 -o br-d1c6e30494d3 -p tcp -m tcp --dport 5432 -j ACCEPT
-A DOCKER -d 172.172.0.3/32 ! -i br-d1c6e30494d3 -o br-d1c6e30494d3 -p tcp -m tcp --dport 8444 -j ACCEPT
-A DOCKER -d 172.172.0.3/32 ! -i br-d1c6e30494d3 -o br-d1c6e30494d3 -p tcp -m tcp --dport 8443 -j ACCEPT
-A DOCKER -d 172.172.0.3/32 ! -i br-d1c6e30494d3 -o br-d1c6e30494d3 -p tcp -m tcp --dport 8001 -j ACCEPT
-A DOCKER -d 172.172.0.3/32 ! -i br-d1c6e30494d3 -o br-d1c6e30494d3 -p tcp -m tcp --dport 8000 -j ACCEPT
-A DOCKER -d 172.172.0.4/32 ! -i br-d1c6e30494d3 -o br-d1c6e30494d3 -p tcp -m tcp --dport 8080 -j ACCEPT
-A DOCKER-ISOLATION-STAGE-1 -i br-d1c6e30494d3 ! -o br-d1c6e30494d3 -j DOCKER-ISOLATION-STAGE-2
You should now be able to access these running containers from machines on your LAN.
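As a quick sanity check from the Docker host itself (or, substituting the host's LAN IP such as the 10.0.0.10 shown earlier, from another machine on the LAN), you can hit Kong's admin and proxy ports; the exact response bodies depend on your Kong configuration:

```shell
# Kong admin API (should return JSON describing the node)
curl -s http://localhost:8001/ | head -c 200; echo

# Kong proxy port; with no routes configured, expect an HTTP 404
# with a JSON "no route matched" style message
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:8000/
```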
An aside… I ran into an issue where Linux persisted the bridge br-d1c6e30494d3 across a reboot, but Docker did not persist the docker1 network. When I re-created the docker1 network, Docker created a new Linux bridge (br-cbe891bbef2b) whose subnet directly overlapped that of br-d1c6e30494d3.
slade@linux-home:~/kong$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
link/ether 60:a4:4c:53:3e:18 brd ff:ff:ff:ff:ff:ff
inet 10.0.0.10/24 brd 10.0.0.255 scope global eno1
valid_lft forever preferred_lft forever
inet6 fe80::62a4:4cff:fe53:3e18/64 scope link
valid_lft forever preferred_lft forever
8: br-d1c6e30494d3: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
link/ether 02:42:56:59:40:fe brd ff:ff:ff:ff:ff:ff
inet 172.172.0.1/16 brd 172.172.255.255 scope global br-d1c6e30494d3
valid_lft forever preferred_lft forever
25: br-cbe891bbef2b: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:89:98:23:18 brd ff:ff:ff:ff:ff:ff
inet 172.172.0.1/16 brd 172.172.255.255 scope global br-cbe891bbef2b
valid_lft forever preferred_lft forever
inet6 fe80::42:89ff:fe98:2318/64 scope link
valid_lft forever preferred_lft forever
The symptom of the overlap was that containers attached to the new Docker bridge, but hosts on the network couldn't actually reach those containers. The fix was to remove the stale bridge br-d1c6e30494d3.
slade@linux-home:~/kong$ sudo ip link set br-d1c6e30494d3 down
slade@linux-home:~/kong$ sudo brctl delbr br-d1c6e30494d3
Problem solved.
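If you ever suspect the same kind of stale-bridge overlap, listing each bridge with its IPv4 subnet makes duplicates easy to spot (assuming a reasonably recent iproute2 that supports type filtering):

```shell
# One line per bridge: interface name and its IPv4 address/prefix
ip -o -4 addr show type bridge | awk '{print $2, $4}'
```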