- The DevOps 2.1 Toolkit: Docker Swarm
- Viktor Farcic
Deploying services to the Swarm cluster
Before we deploy a demo service, we should create a new network so that all containers that constitute the service can communicate with each other no matter on which nodes they are deployed:
docker network create --driver overlay go-demo
The next chapter will explore networking in more detail. Right now, we'll discuss and do only the absolute minimum required for an efficient deployment of services inside a Swarm cluster.
We can check the status of all networks with the command that follows:
docker network ls
The output of the network ls command is as follows:
NETWORK ID      NAME              DRIVER    SCOPE
e263fb34287a    bridge            bridge    local
c5b60cff0f83    docker_gwbridge   bridge    local
8d3gs95h5c5q    go-demo           overlay   swarm
4d0719f20d24    host              host      local
eafx9zd0czuu    ingress           overlay   swarm
81d392ce8717    none              null      local
As you can see, we have two networks with the swarm scope. The one named ingress was created by default when we set up the cluster. The second, go-demo, was created with the network create command. We'll assign all containers that constitute the go-demo service to that network.
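If you want more details about the newly created network (its driver, subnet, and, once containers are attached, their addresses), you can inspect it:
docker network inspect go-demo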
The next chapter will go deeper into Swarm networking. For now, it is important to understand that all services that belong to the same network can speak with each other freely.
The go-demo application requires two containers. Data will be stored in MongoDB. The back-end that uses that DB is defined as the vfarcic/go-demo container.
Let's start by deploying the mongo container somewhere within the cluster.
Usually, we'd use constraints to specify the requirements for the container (for example, disk type, the amount of memory and CPU, and so on). We'll skip that for now and tell Swarm to deploy it anywhere within the cluster:
docker service create --name go-demo-db \
--network go-demo \
mongo:3.2.10
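As a side note, if we did want to restrict where the container can run and reserve resources for it, the command might look like the sketch that follows. We would have used it instead of the previous command, and the constraint and memory values are only arbitrary examples, not something the go-demo service requires:
docker service create --name go-demo-db \
--network go-demo \
--constraint 'node.role==worker' \
--reserve-memory 100m \
--limit-memory 250m \
mongo:3.2.10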
Please note that we haven't published the port Mongo listens on (27017). That means that the database will not be accessible to anyone but other services that belong to the same go-demo network.
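If, on the other hand, we did need the database to be reachable from outside the cluster, we could have published the port with the --publish (-p) flag. The command that follows is only a sketch of that alternative, not something we want for this setup:
docker service create --name go-demo-db \
-p 27017:27017 \
--network go-demo \
mongo:3.2.10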
As you can see, the way we use service create is similar to the docker run command you are probably already used to.
We can list all the running services:
docker service ls
Depending on how much time passed between the service create and service ls commands, you'll see the value of the REPLICAS column being zero or one. Immediately after creating the service, the value should be 0/1, meaning that zero replicas are running, and the objective is to have one. Once the mongo image is pulled and the container is running, the value should change to 1/1.
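If you want to follow the progress of the task itself (for example, while the image is still being pulled), the service ps command shows the state of each task and the node it was scheduled on:
docker service ps go-demo-db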
The final output of the service ls command should be as follows (IDs are removed for brevity):
NAME         MODE        REPLICAS  IMAGE
go-demo-db   replicated  1/1       mongo:3.2.10
If we need more information about the go-demo-db service, we can run the service inspect command:
docker service inspect go-demo-db
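The inspect output is a fairly long JSON. If you prefer a condensed, human-readable summary, you can add the --pretty flag:
docker service inspect --pretty go-demo-db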
Now that the database is running, we can deploy the go-demo container:
docker service create --name go-demo \
-e DB=go-demo-db \
--network go-demo \
vfarcic/go-demo:1.0
There's nothing new about that command. The service will be attached to the go-demo network. The environment variable DB is an internal requirement of the go-demo service that tells the code the address of the database.
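If you'd like to confirm that the variable was indeed recorded in the service definition, something like the following inspect format should print it. The Go template path reflects the layout of the service spec and might differ slightly between Docker versions:
docker service inspect go-demo \
--format '{{.Spec.TaskTemplate.ContainerSpec.Env}}'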
At this point, we have two containers (mongo and go-demo) running inside the cluster and communicating with each other through the go-demo network. Please note that none of them is yet accessible from outside the network. At this point, your users do not have access to the service API. We'll discuss this in more detail soon. Until then, I'll give you only a hint: you need a reverse proxy capable of utilizing the new Swarm networking.
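If you'd like to convince yourself that the two services really can talk to each other, one way is to enter the go-demo container and ping the database by its service name. The commands that follow are only a sketch: they assume that your Docker client is pointed at the node where the go-demo task is running and that the image contains the ping utility:
CONTAINER_ID=$(docker ps -q \
--filter label=com.docker.swarm.service.name=go-demo)
docker exec -it $CONTAINER_ID ping -c 1 go-demo-db
The ping should resolve go-demo-db through Swarm's internal DNS and get a response from the database container.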
Let's run the service ls command one more time:
docker service ls
The result, after the go-demo image is pulled to the destination node, should be as follows (IDs are removed for brevity):
NAME         MODE        REPLICAS  IMAGE
go-demo      replicated  1/1       vfarcic/go-demo:1.0
go-demo-db   replicated  1/1       mongo:3.2.10
As you can see, both services are running as a single replica.
What happens if we want to scale one of the containers? How do we scale our services?