- The DevOps 2.1 Toolkit: Docker Swarm
- Viktor Farcic
Running a database in isolation
We can isolate a database service by not exposing its ports. That can be accomplished easily with the service create command:
docker service create --name go-demo-db \
mongo:3.2.10
We can confirm that the ports are indeed not exposed by inspecting the service:
docker service inspect --pretty go-demo-db
The output is as follows:
ID: rcedo70r2f1njpm0eyb3nwf8w
Name: go-demo-db
Service Mode: Replicated
Replicas: 1
Placement:
UpdateConfig:
Parallelism: 1
On failure: pause
Max failure ratio: 0
ContainerSpec:
Image: mongo:3.2.10@sha256:532a19da83ee0e4e2a2ec6bc4212fc4af\
26357c040675d5c2629a4e4c4563cef
Resources:
Endpoint Mode: vip
As you can see, there is no mention of any port. Our go-demo-db service is fully isolated and inaccessible to anyone. However, that is too much isolation. We want the service to be isolated from everything except the service it belongs to: go-demo. We can accomplish that through Docker Swarm networking.
Let us remove the service we created and start over:
docker service rm go-demo-db
This time, we should create a network and make sure that the go-demo-db service is attached to it:
docker network create --driver overlay go-demo
docker service create --name go-demo-db \
--network go-demo \
mongo:3.2.10
We created an overlay network called go-demo, followed by the go-demo-db service. This time, we used the --network argument to attach the service to the network. From this moment on, all services attached to the go-demo network will be accessible to each other.
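If you'd like to double-check the network itself, we can inspect it and extract the driver and the scope through Docker's --format templating (a quick, optional sanity check):
docker network inspect go-demo \
    --format '{{.Driver}}/{{.Scope}}'
The output should be overlay/swarm, confirming that the network is cluster-scoped rather than local to a single node.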
Let's inspect the service and confirm that it is indeed attached to the network:
docker service inspect --pretty go-demo-db
The output of the service inspect command is as follows:
ID: ktrxcgp3gtszsjvi7xg0hmd73
Name: go-demo-db
Service Mode: Replicated
Replicas: 1
Placement:
UpdateConfig:
Parallelism: 1
On failure: pause
Max failure ratio: 0
ContainerSpec:
Image: mongo:3.2.10@sha256:532a19da83ee0e4e2a2ec6bc4212fc4af\
26357c040675d5c2629a4e4c4563cef
Resources:
Networks: go-demo
Endpoint Mode: vip
As you can see, this time there is a Networks entry referencing the go-demo network we created earlier.
Let us confirm that networking truly works. To prove it, we'll create a global service called util:
docker service create --name util \
--network go-demo --mode global \
alpine sleep 1000000000
Just like go-demo-db, the util service is attached to the go-demo network.
The new argument is --mode. When it is set to global, the service runs on every node of the cluster. That is a very useful feature when we want to set up infrastructure services that should span the whole cluster.
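For contrast, the default mode is replicated, where Swarm maintains a fixed number of instances regardless of the number of nodes. A hypothetical equivalent (the name util-replicated is only for illustration) would be created as follows:
docker service create --name util-replicated \
    --network go-demo --mode replicated \
    --replicas 3 \
    alpine sleep 1000000000
With --replicas 3, Swarm schedules exactly three instances, which may or may not land on distinct nodes.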
We can confirm that it is running everywhere by executing the service ps command:
docker service ps util
The output is as follows (the ID, ERROR, and PORTS columns are removed for brevity):
NAME     IMAGE          NODE    DESIRED STATE  CURRENT STATE
util...  alpine:latest  node-1  Running        Running 6 minutes ago
util...  alpine:latest  node-3  Running        Running 6 minutes ago
util...  alpine:latest  node-2  Running        Running 6 minutes ago
As you can see, the util service is running on all three nodes.
We are running the alpine image (a minuscule Linux distribution) and putting it to sleep for a very long time. Otherwise, with no long-running process inside, the container would stop, Swarm would restart it, it would stop again, and so on.
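Any long-running no-op command serves the same purpose. As a sketch, an alternative keep-alive (util-alt is a hypothetical name chosen to avoid clashing with util) could be:
docker service create --name util-alt \
    --network go-demo --mode global \
    alpine tail -f /dev/null
Both sleep with a huge value and tail -f /dev/null simply block forever, keeping the container alive without consuming resources.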
The purpose of the util service will be to demonstrate some of the concepts we're exploring. We'll exec into it and confirm that the networking truly works.
To enter the util container, we need to find the ID of the instance running on node-1 (the node our local Docker client is pointing to):
ID=$(docker ps -q --filter label=com.docker.swarm.service.name=util)
We listed all the containers (ps) in quiet mode so that only IDs are returned (-q), and limited the result to containers belonging to the util service:
--filter label=com.docker.swarm.service.name=util
The result is stored as the environment variable ID.
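If you'd like to confirm that the variable holds a single container ID before using it, a quick optional check is:
echo $ID
The same variable can be passed to any command that targets one container, for example docker exec -it $ID sh to get an interactive shell inside it.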
We'll install a tool called drill. It is designed to extract all sorts of information out of DNS, and it will come in handy very soon:
docker exec -it $ID apk add --update drill
Alpine Linux uses the package manager called apk, so we told it to add drill.
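To verify that the installation succeeded, we can ask the container where the binary ended up (assuming the image's BusyBox userland provides which, as alpine does):
docker exec -it $ID which drill
The output should be the path to the binary, typically /usr/bin/drill.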
Now we can see whether networking truly works. Since both the go-demo-db and util services belong to the same network, they should be able to communicate with each other using DNS names. Whenever we attach a service to a network, a new virtual IP is created together with a DNS entry that matches the name of the service.
Let's try it out as follows:
docker exec -it $ID drill go-demo-db
We entered into one of the instances of the util service and "drilled" the DNS go-demo-db. The output is as follows:
;; ->>HEADER<<- opcode: QUERY, rcode: NOERROR, id: 5751
;; flags: qr rd ra ; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;; go-demo-db. IN A
;; ANSWER SECTION:
go-demo-db. 600 IN A 10.0.0.2
;; AUTHORITY SECTION:
;; ADDITIONAL SECTION:
;; Query time: 0 msec
;; SERVER: 127.0.0.11
;; WHEN: Thu Sep 1 12:53:42 2016
;; MSG SIZE rcvd: 54
The response code is NOERROR, and ANSWER is 1, meaning that the query for go-demo-db resolved correctly. The service is reachable.
We can also observe that the go-demo-db DNS entry is associated with the IP 10.0.0.2. Every service attached to a network gets its own IP. Please note that I said service, not instance. That's a huge difference that we'll explore later. For now, it is important to understand that all services that belong to the same network are accessible through their service names.
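That address is the service's virtual IP (VIP). As a sketch, assuming the .Endpoint.VirtualIPs field that docker service inspect exposes in its JSON output, the VIP can also be extracted directly with a Go template:
docker service inspect go-demo-db \
    --format '{{(index .Endpoint.VirtualIPs 0).Addr}}'
The output should be the VIP in CIDR notation (for example, 10.0.0.2/24), matching the address drill returned.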
Let's move up through the requirements.