Docker Swarm is a clustering and scheduling tool for Docker containers. With Docker Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system. In Swarm terms, a cluster is a pool of Docker hosts that behaves much like a single large Docker host. Instead of having to decide which host each container should start on, we simply tell Swarm to start our containers and it schedules them for us.
Clustering is an important feature for container technology, because it creates a cooperative group of systems that can provide redundancy, enabling Docker Swarm to fail over if one or more nodes experience an outage. A Docker Swarm cluster also lets administrators and developers scale the number of container instances up or down as computing demands change.
- The Docker Engine command line can be used to manage a swarm without any additional software.
- An entire swarm can be built from a single disk image.
- Swarm manager nodes assign each service in the swarm a unique DNS name and load-balance running containers. Any container running in the swarm can be reached by name through a DNS server embedded in the swarm.
- Communication between the swarm nodes is secured using TLS encryption.
To enable swarm mode, Docker Engine must be installed on every machine that will join the cluster.
The following ports must also be available between the hosts. On some systems, these ports are open by default.
- TCP port 2377 for cluster management communications
- TCP and UDP port 7946 for communication among nodes
- UDP port 4789 for overlay network (VXLAN) traffic
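If a firewall is running on the hosts, these ports have to be opened explicitly. A minimal sketch, assuming firewalld (common on CentOS/RHEL; adjust the commands for ufw or iptables if your distribution uses those instead):

```shell
# Open the swarm ports on every host (firewalld assumed; run as root).
firewall-cmd --permanent --add-port=2377/tcp   # cluster management
firewall-cmd --permanent --add-port=7946/tcp   # node-to-node communication
firewall-cmd --permanent --add-port=7946/udp
firewall-cmd --permanent --add-port=4789/udp   # overlay network (VXLAN)
firewall-cmd --reload
```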
Creating a Swarm
1. Run the following command on the manager machine to create a new swarm:
docker swarm init --advertise-addr <MANAGER-IP>
# docker swarm init --advertise-addr 192.168.1.172
Swarm initialized: current node (ngnzo689geh1qfrv0ldj0gy5r) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-67z9sbnrruzo5s8tnnrorcf85f2iof853lmc18o6w9ndte20wb-1n718wwmkbansoi0lrkj36er7 192.168.1.172:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
The --advertise-addr flag configures the manager node to publish its address as 192.168.1.172.
The nodes joining the swarm must be able to reach the manager at this IP address.
The output shows the command that node machines must run to join the swarm.
2. Each node machine joins the swarm by running the join command shown in the output, for example:
#docker swarm join --token SWMTKN-1-4b6qkg951n22jc1f4fjfg5xcjks9jvquj1i5h8l8ja0tasglz0-7ju8db7nwotmirxonsipi6hpg 192.168.1.172:2377
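If the join command is misplaced later on, the token does not have to be hunted down from old terminal output; it can be reprinted on the manager at any time:

```shell
# Run on the manager to reprint the join command for workers or managers.
docker swarm join-token worker
docker swarm join-token manager
```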
The docker node ls command, run on the manager, displays all the nodes in the swarm:
docker node ls
ID                          HOSTNAME                     STATUS   AVAILABILITY   MANAGER STATUS
0j6pss3z5kbu85710sswywx58 * dev.sysally.com              Ready    Active         Leader
tvexppkk4l3j3eha0n6qoqlq1   dockernode.dev.sysally.com   Down     Active
ww5j3htclpjqcud571amyooi8   SysAlly-Workstation          Ready    Active
The * next to the node ID indicates the node you are currently connected to.
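As a quick sanity check, this output can also be processed with standard shell tools. A small sketch that counts the nodes reported as Ready; for illustration it uses the sample output above as its input, where in practice the text would come straight from docker node ls:

```shell
#!/bin/sh
# Count swarm nodes in the Ready state. The sample below mirrors the
# `docker node ls` output shown above; normally you would pipe the real
# command output into awk instead.
sample='ID                          HOSTNAME                     STATUS   AVAILABILITY   MANAGER STATUS
0j6pss3z5kbu85710sswywx58 * dev.sysally.com              Ready    Active         Leader
tvexppkk4l3j3eha0n6qoqlq1   dockernode.dev.sysally.com   Down     Active
ww5j3htclpjqcud571amyooi8   SysAlly-Workstation          Ready    Active'

# Skip the header line, then count rows containing "Ready".
ready=$(printf '%s\n' "$sample" | awk 'NR > 1 && $0 ~ /Ready/ { n++ } END { print n }')
echo "Ready nodes: $ready"
```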
Deploying a service to the Docker Swarm
When Docker Engine is in swarm mode, an application image is deployed by creating a service. A service is typically one component of some larger application, and it can be any type of executable program, such as an HTTP server or a database, that you wish to run in a distributed environment.
To deploy a service to the swarm, SSH into the manager machine (the leader). The docker service create command deploys a service to the swarm.
# docker service create --replicas 2 --name ubuntu_test ubuntu bash
o3qu4ro4kudeg1mflydmjxrf9
When a service is deployed to the swarm, the swarm manager accepts the service definition as the desired state for the service. It schedules the service on nodes in the swarm as one or more replica tasks. Once the service is deployed, these tasks run independently of one another, and each task starts its own container on the node where it is scheduled.
In the above example, a Docker service based on the ubuntu image is created with 2 replicas spread across the swarm nodes.
Breaking down the command 'docker service create --replicas 2 --name ubuntu_test ubuntu bash':
docker service create : creates a Docker service in the swarm
--replicas 2 : directs the Docker engine to keep 2 replicas of the service running across the node machines
--name : the flag used to specify the name to be given to the service
ubuntu_test : the name given to the service, following the --name flag
ubuntu : the name of the Docker image to be used (e.g. centos, nginx, php etc.)
bash : the command we wish to run in the container
Running docker ps -a on the manager machine displays all the containers started on the manager, while docker ps on a node lists the containers running on that node machine.
Together these list the two replica ubuntu containers created as a result of the docker service create command issued on the manager machine.
To get a clearer, swarm-wide picture, docker service ps ubuntu_test lists the tasks of the service across all nodes, including the manager.
The above screenshot shows that the ubuntu image is deployed across the swarm with two replicas. This is an example of deploying swarm-wide containers from the manager node.
High-Availability Nginx Container in Docker Swarm
High availability (HA) means keeping the containers available at all times, even in the event of a failure. This is made possible by creating multiple instances of the same container. It is not just about getting the containers online, but also about bringing back a container that has failed.
To create a high-availability container in the Docker Swarm, we deploy a service with the nginx image to the swarm. This is done with the docker service create command, as shown earlier.
# docker service create --name nginx --publish 80:80 nginx
This creates a single nginx container, since no replica count is specified (the default is 1). Please see the screenshots below:
On the host machine:
On first node:
On second node:
The screenshot below shows that the three nodes have been added to the swarm.
Loading the IPs in a browser shows the nginx welcome page, and the page is available across the entire swarm: loading the IP of either the swarm manager or a swarm node displays it, because swarm mode's routing mesh publishes the service's port on every node. Please see the screenshots below.
The IP of the manager node is loaded as shown in screenshot below:
The swarm node 1 IP is loaded in the browser as in the screenshot below:
The same nginx instance can be scaled out to multiple containers to set up high-availability containers. To demonstrate this from scratch, the previously created service is first removed using the docker service rm command.
[root@docker-manager ~]# docker service rm webserver
Various switches can be used with the docker service create command. One such switch is --replicas, which tells the swarm to maintain the specified number of replicas of the service.
In this case, let us create an nginx swarm service with 3 replicas. The nginx containers will be spread across the swarm as load-balanced, highly available containers.
The service with three replicas can be created using the following command:
[root@docker-manager ~]# docker service create --replicas 3 --name webserver --publish 80:80 nginx
[root@docker-manager ~]# docker service ls
ID             NAME        MODE         REPLICAS   IMAGE
5qakppa3it7k   webserver   replicated   3/3        nginx:latest
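Once the service exists, the replica count can also be changed in place with docker service scale, without removing and recreating the service as above; a sketch:

```shell
# Scale the webserver service up to 5 replicas, then back down to 3.
docker service scale webserver=5
docker service ls        # REPLICAS column shows 5/5 once the tasks converge
docker service scale webserver=3
```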
The following screenshots show the 3 replicas of the nginx server on the manager machine and the two nodes.
The nginx container running on the master node is shown below:
Note that the container name is webserver.1.**, which indicates it is the first replica.
The container running on the first node is shown below;
This is the second replica of the nginx server, hence the name webserver.2.**.
The container running on the other node is shown below.
This is the third replica, running on the second node, and the container is named webserver.3.** accordingly.
Now, let's see how the containers work in a load-balanced environment. To show the difference, I will log in to the shell of the containers created earlier on both the manager and a node. I can then edit the document root of each container, so that the responses differ and illustrate how requests are distributed by the load balancer.
To edit the document root of the default webpage, we need to get into the shell of the container. This is done with the docker exec -it command.
To attach to a container, we need its container ID, which can be found with docker ps.
The container ID in this case is 'aaa6e5e585e8'. To attach to this container, execute docker exec -it as below:
#docker exec -it aaa6e5e585e8 /bin/bash
The above command directs Docker to run bash inside the container, which drops us into the container's shell. Any changes to the container and to nginx can be made from here.
To leave this shell and return to the host machine while keeping the container running in the background, press Ctrl+P followed by Ctrl+Q; typing exit also works here, because ending a docker exec session does not stop the container itself. Note that if a container's main process is killed, the swarm manager immediately creates another nginx container to restore the desired replica count.
I have made some changes to the default index.html file of both Docker containers, so that their responses differ and the load balancing can be observed.
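The edit itself can also be done non-interactively with docker exec; a minimal sketch, assuming the stock nginx image, whose document root is /usr/share/nginx/html (the container ID matches the example above, and the page text here is just a placeholder):

```shell
# Overwrite the default page inside a running nginx container.
# aaa6e5e585e8 is the container ID found via `docker ps`.
docker exec aaa6e5e585e8 \
  sh -c 'echo "<h1>Served from the manager node</h1>" > /usr/share/nginx/html/index.html'
```

Repeating this on the node's container with different text makes it obvious which replica served each request.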
In the above screenshot, I have loaded the IP of the manager machine, i.e. 18.104.22.168, and it displays the default page from the Docker manager. Refreshing the page without changing the IP displays the content from the Docker node instead. Thus, load balanced!
From the above screenshot, it is evident that the IP has not changed but the web page changes per request. This is how Docker load balancing works: requests are balanced internally by the Docker engine, without any extra piece of software or hardware. This is an example of how a load-balanced cluster can be created.
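The same round-robin behaviour can be observed from the command line instead of a browser; a sketch, assuming curl is installed and the service is published on port 80 of the manager at 192.168.1.172:

```shell
# Fetch the page several times from the same IP. With distinct index.html
# contents per container, the responses alternate between the replicas.
for i in 1 2 3 4; do
  curl -s http://192.168.1.172/ | head -n 1
done
```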
Docker Swarm requires less infrastructure and fewer resources than other technologies used to build the same kind of setup.
Thanks for dropping by. Ready for the next blog?