The bridge network

The Docker bridge network is the first implementation of the container network model that we're going to look at in detail. This network implementation is based on the Linux bridge. When the Docker daemon runs for the first time, it creates a Linux bridge and calls it docker0. This is the default behavior and can be changed in the daemon configuration. Docker then creates a network on top of this Linux bridge and calls the network bridge. All containers that we create on a Docker host and that we do not explicitly bind to another network are automatically attached by Docker to this bridge network.
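
If you are curious, you can check directly on the Docker host that this Linux bridge really exists. The following is a minimal sketch, assuming the iproute2 tools are installed on the host (on Docker Desktop for Mac or Windows you would have to run this inside the VM in which the Docker daemon runs):

$ ip link show docker0

If an interface named docker0 is listed, the default bridge is in place.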

To verify that we indeed have a network called bridge of type bridge defined on our host, we can list all networks on the host with the following command:

$ docker network ls

This should provide an output similar to the following:

Listing of all Docker networks available by default

In your case, the IDs will be different, but the rest of the output should look the same. We do indeed have a first network called bridge using the driver bridge. The scope being local just means that this type of network is restricted to a single host and cannot span across multiple hosts. In a later chapter, we will also discuss other types of networks that have a global scope, meaning they can span whole clusters of hosts.

Now, let's look a little bit deeper into what this bridge network is all about. For this, we are going to use the Docker inspect command:

$ docker network inspect bridge

When executed, this outputs a big chunk of detailed information about the network in question. This information should look like the following:

Output generated when inspecting the Docker bridge network

We have already seen the ID, Name, Driver, and Scope values when we listed all the networks, so that is nothing new. But let's have a look at the IP address management (IPAM) block. IPAM is software that is used to track the IP addresses that are in use on a computer. The important part of the IPAM block is the Config node with its values for Subnet and Gateway. The subnet for the bridge network is defined by default as 172.17.0.0/16. This means that all containers attached to this network will get an IP address assigned by Docker from the range 172.17.0.2 to 172.17.255.254. The 172.17.0.1 address is reserved for the router of this network, whose role in this type of network is taken by the Linux bridge, and 172.17.255.255 is the subnet's broadcast address. We can expect that the very first container attached to this network by Docker will get the 172.17.0.2 address, with all subsequent containers getting higher numbers; the following image illustrates this fact:

The bridge network

In the preceding image, we can see the network namespace of the host, which includes the host's eth0 endpoint. This is typically a NIC if the Docker host runs on bare metal, or a virtual NIC if the Docker host is a VM. All traffic to the host comes through eth0. The Linux bridge is responsible for routing the network traffic between the host's network and the subnet of the bridge network.
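
By the way, if you are only interested in the subnet and the gateway and do not want to scan the full inspect output each time, you can use a Go template with the --format parameter of docker network inspect. The following is just one possible variant, using the field names from the IPAM block shown above:

$ docker network inspect \
    --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}' bridge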

By default, only egress traffic is allowed, and all ingress traffic is blocked. This means that while containerized applications can reach the internet, they cannot be reached by any outside traffic. Each container attached to the network gets its own virtual ethernet (veth) connection with the bridge. This is illustrated in the following image:

Details of the bridge network

The preceding image shows us the world from the perspective of the host. We will explore how the situation looks from within a container later on in this section.
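
We can make this picture a bit more concrete on the Docker host itself. The following sketch, again assuming the iproute2 tools are available on the host, lists all veth endpoints that currently exist; once containers are attached to the bridge network, you should see one such interface per attached container:

$ ip link show type veth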

We are not limited to just the bridge network, as Docker allows us to define our own custom bridge networks. This is not just a nice-to-have feature; it is a recommended best practice not to run all containers on the same network, but to use additional bridge networks to further isolate containers that have no need to communicate with each other. To create a custom bridge network called sample-net, use the following command:

$ docker network create --driver bridge sample-net

If we do this, we can then inspect what subnet Docker has created for this new custom network, as follows:

$ docker network inspect sample-net | grep Subnet

This returns the following value:

"Subnet": "172.18.0.0/16",

Evidently, Docker has just assigned the next free block of IP addresses to our new custom bridge network. If, for some reason, we want to specify our own subnet range when creating a network, we can do so by using the --subnet parameter:

$ docker network create --driver bridge --subnet "10.1.0.0/16" test-net

To avoid conflicts due to duplicate IP addresses, make sure you do not create networks with overlapping subnets.
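
The --subnet parameter is not the only IPAM-related option of docker network create. If needed, you can also pin the gateway address and restrict the range from which Docker assigns container addresses. The following is just a sketch of such a call; the network name another-net and the 10.2.x.x values are made up for illustration:

$ docker network create --driver bridge \
    --subnet "10.2.0.0/16" \
    --gateway "10.2.0.1" \
    --ip-range "10.2.0.0/24" \
    another-net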

Now that we have discussed what a bridge network is and how one can create a custom bridge network, we want to understand how we can attach containers to these networks. First, let's interactively run an Alpine container without specifying the network to be attached:

$ docker container run --name c1 -it --rm alpine:latest /bin/sh

In another Terminal window, let's inspect the c1 container:

$ docker container inspect c1

In the vast output, let's concentrate for a moment on the part that provides network-related information. It can be found under the NetworkSettings node. I have it listed in the following output:

Network settings section of the container metadata

In the preceding output, we can see that the container is indeed attached to the bridge network, since the NetworkID is equal to 026e65..., which, as we can see from the preceding code, is the ID of the bridge network. We can also see that the container got the IP address 172.17.0.2 assigned, as expected, and that the gateway is at 172.17.0.1. Please note that the container also has a MacAddress associated with it. This is important, as the Linux bridge uses the MAC address to forward traffic between its endpoints.
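
If you do not want to scroll through the full inspect output every time, a Go template can pull out just these three values. The following is one possible variant, using the field names from the NetworkSettings node shown above:

$ docker container inspect \
    --format '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{.Gateway}} {{.MacAddress}}{{end}}' c1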

So far, we have approached this from the outside of the container's network namespace. Now, let's see how the situation looks when we're not only inside the container, but inside the container's network namespace. Inside the c1 container, let's use the ip tool to inspect what's going on. Run the ip addr command and observe the output that is generated as follows:

Container namespace as seen by the IP tool

The interesting part of the preceding output is the interface with the number 195, the eth0 endpoint. It is the container-side end of the veth pair whose other end (if196 in this example) sits outside of the container namespace and is attached to the Linux bridge. Docker always maps the first endpoint of a container network namespace to eth0, as seen from inside the namespace. If the network namespace is attached to an additional network, then that endpoint will be mapped to eth1, and so on.

Since at this point we're not really interested in any endpoint other than eth0, we could have used a more specific variant of the command, which would have given us the following:

/ # ip addr show eth0
195: eth0@if196: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP
    link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever

In the output, we can also see what MAC address (02:42:ac:11:00:02) and what IP (172.17.0.2) have been associated with this container network namespace by Docker.
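
The eth0@if196 notation already tells us that eth0 is one end of a veth pair and that its peer is the interface with index 196 in the host's network namespace. If you want to locate that peer yourself, one possible way, assuming /sys is mounted inside the container (which it is by default), is the following:

/ # cat /sys/class/net/eth0/iflink
196

Back on the Docker host, ip link | grep '^196:' should then show the corresponding veth interface that is attached to the Linux bridge.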

We can also get some information about how requests are routed by using the ip route command:

/ # ip route
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 scope link src 172.17.0.2

This output tells us that all traffic leaving the subnet is sent to the default gateway at 172.17.0.1 through the eth0 device, while traffic within 172.17.0.0/16 is delivered directly via eth0.

Now, let's run another container called c2 on the same network:

$ docker container run --name c2 -d alpine:latest ping 127.0.0.1

The c2 container will also be attached to the bridge network, since we have not specified any other network. Its IP address will be the next free one from the subnet, which is 172.17.0.3, as we can readily test:

$ docker container inspect --format "{{.NetworkSettings.IPAddress}}" c2
172.17.0.3

Now, we have two containers attached to the bridge network. We can inspect this network once again to find a list of all the containers attached to it in the output:

$ docker network inspect bridge

The information is found under the Containers node:

The containers section of the output of docker network inspect bridge

Once again, we have shortened the output to the essentials for readability.
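
If you only care about which containers are attached to a network, you can once again reach for a Go template instead of reading the full inspect output. One possible sketch is the following:

$ docker network inspect --format '{{json .Containers}}' bridge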

Now, let's create two additional containers, c3 and c4, and attach them to the test-net. For this, we use the --network parameter:

$ docker container run --name c3 -d --network test-net \
alpine:latest ping 127.0.0.1
$ docker container run --name c4 -d --network test-net \
alpine:latest ping 127.0.0.1

Let's inspect network test-net and confirm that the containers c3 and c4 are indeed attached to it:

$ docker network inspect test-net

This will give us the following output for the Containers section:

Containers section of the command docker network inspect test-net

The next question we're going to ask ourselves is whether the two containers, c3 and c4, can freely communicate with each other. To demonstrate that this is indeed the case, we can exec into the c3 container:

$ docker container exec -it c3 /bin/sh

Once inside the container, we can try to ping container c4 by name and by IP address:

/ # ping c4
PING c4 (10.1.0.3): 56 data bytes
64 bytes from 10.1.0.3: seq=0 ttl=64 time=0.192 ms
64 bytes from 10.1.0.3: seq=1 ttl=64 time=0.148 ms
...

The following is the result of the ping using the IP address of the container c4:

/ # ping 10.1.0.3
PING 10.1.0.3 (10.1.0.3): 56 data bytes
64 bytes from 10.1.0.3: seq=0 ttl=64 time=0.200 ms
64 bytes from 10.1.0.3: seq=1 ttl=64 time=0.172 ms
...

The answer in both cases confirms to us that the communication between containers attached to the same network is working as expected. The fact that we can even use the name of the container we want to connect to shows us that the name resolution provided by the Docker DNS service works inside this network.
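
This name resolution is provided by Docker's embedded DNS server, which is available to containers attached to user-defined networks. If you want to see it at work, you can query it explicitly from within c3; this sketch assumes that the nslookup applet of BusyBox is present in the alpine image:

/ # nslookup c4

The answer should come from the embedded DNS server listening at 127.0.0.11 and resolve c4 to its IP address on the test-net network.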

Now we want to make sure that the bridge and the test-net networks are firewalled from each other. To demonstrate this, we can try to ping the c2 container from the c3 container, either by its name or by its IP address:

/ # ping c2
ping: bad address 'c2'

The following is the result of the ping using the IP address of the target container c2 instead:

/ # ping 172.17.0.3
PING 172.17.0.3 (172.17.0.3): 56 data bytes
^C
--- 172.17.0.3 ping statistics ---
43 packets transmitted, 0 packets received, 100% packet loss

The preceding command remained hanging and I had to terminate it with Ctrl+C. From the answer to pinging c2 by name, we can also see that name resolution does not work across networks. This is the expected behavior; networks provide an extra layer of isolation, and thus security, to containers.

Earlier, we learned that a container can be attached to multiple networks. Let's attach a c5 container to the sample-net and test-net networks at the same time:

$ docker container run --name c5 -d \
--network sample-net \
--network test-net \
alpine:latest ping 127.0.0.1
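
Note that, depending on the version of Docker you are running, docker container run may accept only a single --network parameter. In that case, you can achieve the same result in two steps by running the container attached to one network and then attaching it to the second network with docker network connect, sketched as follows:

$ docker container run --name c5 -d \
    --network sample-net \
    alpine:latest ping 127.0.0.1
$ docker network connect test-net c5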

We can then test that c5 is reachable from the c3 container, just as we previously tested that c4 is reachable from c3. The result will show that the connection indeed works.

If we want to remove an existing network, we can use the docker network rm command, but note that one cannot accidentally delete a network that has containers attached to it:

$ docker network rm test-net
Error response from daemon: network test-net id 863192... has active endpoints

Before we continue, let's clean up and remove all containers:

$ docker container rm -f $(docker container ls -aq)

Then we remove the two custom networks that we created:

$ docker network rm sample-net
$ docker network rm test-net
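
Alternatively, if you just want to get rid of all networks that no longer have any containers attached to them, you can let Docker do the bookkeeping for you; the command asks for confirmation before it removes anything:

$ docker network prune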