Bahmni on Docker Swarm

Docker Swarm is a native clustering and orchestration solution for Docker containers. It enables the creation and management of a swarm of Docker nodes that act as a single virtual system. Docker Swarm provides several benefits, including high availability, scalability, and load balancing, making it ideal for large-scale containerized deployments.

Bahmni runs on Docker with Docker Compose where each service in the Bahmni stack is defined as a separate container and can be started or stopped individually. Docker Compose makes it easy to configure and launch the entire Bahmni stack with a single command on a single node. However, running Bahmni on Docker Swarm involves defining the Bahmni services as Docker Swarm services. These services can be distributed across multiple Docker Swarm nodes, providing high availability and load balancing capabilities. Docker Swarm also provides additional features such as rolling updates and scaling that make it ideal for production deployments.

Bahmni also provides a Kubernetes solution (see Bahmni on AWS Kubernetes and Bahmni on Kubernetes minikube for Development), another popular container orchestration tool. However, Docker Swarm's simplicity and tighter integration with Docker make it an easier and more straightforward choice for small to medium-sized deployments. Kubernetes, on the other hand, provides a wider range of advanced features and is better suited for larger, more complex deployments. Ultimately, the choice between Docker Swarm and Kubernetes will depend on the specific needs and resources of your deployment.

Please note that this documentation only provides a solution for leveraging the existing Docker Compose Bahmni setup to be used on Docker Swarm. As of now, Bahmni on Docker Swarm is not officially supported. We recommend reaching out to the Bahmni community or the Bahmni support team for assistance and support in case of any issues or feedback.

 

The overall process

 

  • The first step to deploy multi-container applications across a Swarm is to set up a Docker Swarm Cluster.

  • Once a Docker Swarm is set up, we can re-use our docker compose config to deploy and manage Bahmni containers across the Swarm.

  • The docker stack deploy command should be run on the Swarm manager node, passing in the name of the Docker Compose file as a parameter, to deploy Bahmni. Docker Swarm will create the necessary containers, networks, and volumes across the Swarm.

It should be noted that deploying Bahmni on a Swarm cluster is not a straightforward process due to the way Bahmni's Docker Compose setup is configured. However, this guide will outline the challenges you may encounter at each step of the process and offer solutions to overcome them.

Setting up the Swarm Cluster

Setting up a Docker Swarm cluster is the first step in deploying Bahmni on Docker Swarm. A node is an instance of the Docker engine participating in the swarm. In a production environment, swarm deployments typically include Docker nodes distributed across multiple physical and cloud machines.

In a Docker Swarm cluster, there are two types of nodes: manager nodes and worker nodes. Manager nodes are responsible for managing the Swarm, orchestrating tasks, and managing worker nodes. They are also responsible for maintaining the state of the Swarm, scheduling tasks, and managing the Swarm's configuration. Worker nodes are responsible for running tasks assigned by the Swarm manager, such as running containers, and reporting back the status of those tasks.

To manage the global state of the cluster, the manager nodes implement the Raft Consensus Algorithm, ensuring that all the nodes are storing the same consistent state. To function properly, the Raft algorithm requires a majority or quorum of (N/2)+1 members to agree on proposed values. For instance, if a cluster has five managers, and three become unavailable, the system cannot process any more requests to schedule additional tasks. The existing system will continue running, but the scheduler cannot rebalance tasks to cope with failures.

Manager nodes must have a static IP address and advertise it to the other nodes in the swarm. If a manager node gets a new IP address, the other nodes can no longer contact it at its previous address.

Unlike manager nodes, worker nodes can have dynamic IP addresses. If a worker node goes down, the join command (including the join token) can be retrieved again so that the node can rejoin the swarm.

Create the swarm

Prerequisite: Getting started with Swarm mode

  1. On the manager node, initialize the swarm. If the host only has one network interface, the --advertise-addr flag is optional.

    docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER>

    Make a note of the text that is printed, as this contains the token that you will use to join worker nodes to the swarm. It is a good idea to store the token in a password manager.

  2. On each worker node, join the swarm. If the host only has one network interface, the --advertise-addr flag is optional.

    docker swarm join --token <TOKEN> \
      --advertise-addr <IP-ADDRESS-OF-WORKER-1> \
      <IP-ADDRESS-OF-MANAGER>:2377

You can now list all the nodes connected in the swarm cluster by running the following command on the manager node:

docker node ls

Deploying Bahmni using Docker Stack Deploy

When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file. The docker stack deploy command supports any Compose file of version "3.0" or above.

Bahmni's compose file can be deployed with the docker stack deploy command. However, Bahmni's compose file is set up to read a .env file that holds all of its environment variables, and docker stack deploy does not accept the .env file directly: reading a .env file is a feature of Docker Compose, not of docker stack deploy.

To make use of the existing .env file, we need to use a few commands. Here are the steps to deploy Bahmni using Docker Stack Deploy (using bahmni-lite as example):

  1. Clone the bahmni-docker repository

  2. Navigate to bahmni-lite:
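
    Assuming the repository was cloned into a directory named bahmni-docker with bahmni-lite at its top level:

    cd bahmni-docker/bahmni-lite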

  3. Run the following command to export the variables from the .env file to the shell:
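
    For example (a sketch, assuming a POSIX shell, that the command is run from the directory containing the .env file, and that no values contain spaces):

    export $(grep -v '^#' .env | grep -v '\*' | xargs)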

    This command skips all commented-out lines as well as any lines whose value contains an asterisk; exporting the variables this way is needed because docker stack deploy does not read .env files directly.

Any environment variables with asterisks in the value, such as MART_CRON_TIME='*/15 * * * *', will not be exported by this command as the asterisk character is interpreted as a wildcard character, which causes issues with the export command.

We can manually export it as follows:
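
export MART_CRON_TIME='*/15 * * * *'   # single quotes prevent the shell from expanding the asterisks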

Deploying the stack with the Docker Compose file

To deploy the stack using Docker stack deploy with Docker Compose, the following changes need to be made to the Docker Compose file:

  1. Comment out the "profiles" property: Docker stack deploy ignores some Docker Compose properties on its own, but the "profiles" property is not compatible with stack deploy. When the "profiles" property is present in the Docker Compose file, we found that deployment fails with the error "Additional property profiles is not allowed". To resolve this, comment out the "profiles" property in the Docker Compose file.
    One alternative to using profiles in Docker Compose is to use multiple Compose files to manage different configurations for your services. Read Extend

  2. Define networks for your services: By default, when we use docker compose, Docker creates a default network for all the services defined in the docker-compose.yml file. However, when using docker stack deploy, we need to explicitly define the networks in the docker-compose.yml file.
    To define a network in the compose file, add the networks property to the service definition, specifying the name of the network the service should attach to. For example, let's create a network called bahmni for the proxy service. We can add the following lines to the proxy service in the docker-compose.yml file:
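
    services:
      proxy:
        # ...existing proxy configuration (image, ports, volumes, etc.) stays unchanged...
        networks:
          - bahmni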

    Next, we define the bahmni network at the top level of the compose file:
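
    networks:
      bahmni:
        driver: overlay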

    This creates an overlay network called bahmni that spans across all nodes in the swarm. The overlay network driver allows for automatic service discovery and load balancing across nodes in the swarm and services will make use of this to communicate with each other. Read Manage swarm service networks

  3. Define the deploy property in the compose file: The deploy property in a Docker Compose file allows us to define how the services are deployed in a Swarm cluster. It contains several sub-properties that we can use to configure different aspects of the deployment, such as the number of replicas, resource constraints, and placement constraints. In the example below, we set the replicas value to 3 for the bahmni-web service and add a placement constraint that ensures the service only runs on a node with the manager role:
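
    services:
      bahmni-web:
        # ...existing bahmni-web configuration stays unchanged...
        deploy:
          replicas: 3
          placement:
            constraints:
              - node.role == manager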

This is how docker-compose.yml would look after making the above changes:
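
An abbreviated sketch of the result (only two services are shown, and their original settings are elided; the actual bahmni-lite compose file defines many more services, environment variables, and volumes):

version: "3.7"   # any version from 3.0 upwards works with docker stack deploy

services:
  proxy:
    # profiles: [...]   # commented out, since docker stack deploy rejects this property
    # ...rest of the existing proxy definition...
    networks:
      - bahmni

  bahmni-web:
    # profiles: [...]   # commented out, since docker stack deploy rejects this property
    # ...rest of the existing bahmni-web definition...
    networks:
      - bahmni
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == manager

networks:
  bahmni:
    driver: overlay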

Feel free to make changes as per your needs.

Finally, we can deploy the stack with the command:
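
Run it on the manager node from the directory containing the compose file; the stack name (bahmni below) is just an example:

docker stack deploy -c docker-compose.yml bahmni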

Volumes

By default, Docker uses the “local” volume driver, which creates volumes automatically on whichever host a task is scheduled. However, this approach may not be suitable for our use case, where services may be scheduled on different nodes. In such cases, Docker may create a new, empty volume on each node, resulting in an inconsistent state.

To avoid this issue, we can use third-party persistence solutions such as EFS, EBS, GlusterFS, iSCSI, or SSHFS to ensure that volumes are shared across all nodes in the cluster. By using these solutions, we can ensure that services can access the same data regardless of the node they are scheduled on, leading to a more consistent and reliable system.
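
As an illustration, a named volume can be backed by a shared NFS export (for example an EFS mount target) through the local driver's mount options; the volume name, server address, and path below are placeholders:

volumes:
  patient-data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=<NFS-SERVER-OR-EFS-DNS-NAME>,rw,nfsvers=4"
      device: ":/exported/path"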

Resources:

Setting up on physical servers - Gluster Docs
What is Amazon Elastic File System? - Amazon Elastic File System

 

The Bahmni documentation is licensed under Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)