Bahmni on Docker Swarm

Docker Swarm is a native clustering and orchestration solution for Docker containers. It enables the creation and management of a swarm of Docker nodes that act as a single virtual system. Docker Swarm provides several benefits, including high availability, scalability, and load balancing, making it ideal for large-scale containerized deployments.

Bahmni runs on Docker with Docker Compose, where each service in the Bahmni stack is defined as a separate container that can be started or stopped individually. Docker Compose makes it easy to configure and launch the entire Bahmni stack with a single command on a single node. Running Bahmni on Docker Swarm, by contrast, involves defining the Bahmni services as Docker Swarm services. These services can be distributed across multiple Swarm nodes, providing high availability and load balancing. Docker Swarm also offers features such as rolling updates and scaling that make it well suited for production deployments.

Bahmni also provides a solution for Kubernetes, another popular container orchestration tool (see Bahmni on AWS Kubernetes and Bahmni on Kubernetes minikube for Development). However, Docker Swarm's simplicity and tighter integration with Docker make it a more straightforward choice for small to medium-sized deployments. Kubernetes, on the other hand, provides a wider range of advanced features and is better suited for larger, more complex deployments. Ultimately, the choice between Docker Swarm and Kubernetes depends on the specific needs and resources of your deployment.

Please note that this documentation only provides a solution for leveraging the existing Docker Compose Bahmni setup to be used on Docker Swarm. As of now, Bahmni on Docker Swarm is not officially supported. We recommend reaching out to the Bahmni community or the Bahmni support team for assistance and support in case of any issues or feedback.


The overall process


  • The first step in deploying multi-container applications across a Swarm is to set up a Docker Swarm cluster.

  • Once a Docker Swarm is set up, we can re-use our Docker Compose configuration to deploy and manage Bahmni containers across the Swarm.

  • The docker stack deploy command should be run on the Swarm manager node, passing in the name of the Docker Compose file as a parameter, to deploy Bahmni. Docker Swarm will create the necessary containers, networks, and volumes across the Swarm.

It should be noted that deploying Bahmni on a Swarm cluster is not a straightforward process due to the way Bahmni's Docker Compose setup is configured. However, this guide will outline the challenges you may encounter at each step of the process and offer solutions to overcome them.

Setting up the Swarm Cluster

Setting up a Docker Swarm cluster is the first step in deploying Bahmni on Docker Swarm. A node is an instance of the Docker engine participating in the swarm. In a production environment, swarm deployments typically include Docker nodes distributed across multiple physical and cloud machines.

In a Docker Swarm cluster, there are two types of nodes: manager nodes and worker nodes. Manager nodes are responsible for managing the Swarm, orchestrating tasks, and managing worker nodes. They are also responsible for maintaining the state of the Swarm, scheduling tasks, and managing the Swarm's configuration. Worker nodes are responsible for running tasks assigned by the Swarm manager, such as running containers, and reporting back the status of those tasks.

To manage the global state of the cluster, the manager nodes implement the Raft Consensus Algorithm, ensuring that all the nodes are storing the same consistent state. To function properly, the Raft algorithm requires a majority or quorum of (N/2)+1 members to agree on proposed values. For instance, if a cluster has five managers, and three become unavailable, the system cannot process any more requests to schedule additional tasks. The existing system will continue running, but the scheduler cannot rebalance tasks to cope with failures.
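The quorum arithmetic can be sketched quickly. This is a plain shell illustration of the (N/2)+1 rule, not a Docker command:

```shell
# Fault tolerance of a Raft manager group: a cluster of N managers needs a
# quorum of (N/2)+1 and therefore tolerates the loss of (N-1)/2 managers.
for N in 1 3 5 7; do
  quorum=$(( N / 2 + 1 ))
  tolerated=$(( (N - 1) / 2 ))
  echo "managers=$N quorum=$quorum failures_tolerated=$tolerated"
done
```

For the five-manager example above, quorum is 3, so losing three managers leaves only two nodes: below quorum, and no new tasks can be scheduled.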

Manager nodes must have a static IP address, which they advertise to the other nodes in the swarm. If a manager node gets a new IP, it becomes impossible for any older node to contact it at its previous IP address.

Unlike manager nodes, worker nodes can have dynamic IP addresses. If a worker node goes down, it can rejoin the swarm by running the join command, including the join token, again.
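The join command can be retrieved at any time from a manager node; docker swarm join-token is the standard Docker CLI command for this:

```shell
# Print the full "docker swarm join" command, including the current token,
# that a worker node can use to join (or rejoin) the swarm.
docker swarm join-token worker

# If the token may have been exposed, rotate it; nodes already in the swarm
# are unaffected.
docker swarm join-token --rotate worker
```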

Create the swarm


  1. On the manager node, initialize the swarm. If the host has only one network interface, the --advertise-addr flag is optional.

    docker swarm init --advertise-addr=<IP-ADDRESS-OF-MANAGER>

    Make a note of the text that is printed, as this contains the token that you will use to join worker nodes to the swarm. It is a good idea to store the token in a password manager.

  2. On each worker node, join the swarm. If the host has only one network interface, the --advertise-addr flag is optional.

    docker swarm join --token <TOKEN> \
      --advertise-addr <IP-ADDRESS-OF-WORKER-1> \
      <IP-ADDRESS-OF-MANAGER>:2377

You can now list all the nodes connected in the swarm cluster by running the following command on the manager node:

docker node ls

Deploying Bahmni using Docker Stack Deploy

When running Docker Engine in swarm mode, you can use docker stack deploy to deploy a complete application stack to the swarm. The deploy command accepts a stack description in the form of a Compose file. The docker stack deploy command supports any Compose file of version "3.0" or above.

Bahmni's Compose file can be deployed with the docker stack deploy command. However, the Bahmni Compose setup keeps all of its environment variables in a .env file, and docker stack deploy does not read .env files directly; that behaviour is a feature of Docker Compose, not of docker stack deploy.

To make use of the existing .env file, we first need to export its variables into the shell. Here are the steps to deploy Bahmni using docker stack deploy (using bahmni-lite as an example):

  1. Clone the bahmni-docker repository

    git clone https://github.com/Bahmni/bahmni-docker.git
  2. Navigate to bahmni-lite:

    cd bahmni-docker/bahmni-lite
  3. Run the following command to export the variables from the .env file to the shell:

    export $(sed '/^#/d; /\*.*\*/d' .env)

    This command strips out commented lines, as well as any line whose value contains asterisks, because docker stack deploy does not accept .env files directly.

Any environment variable with asterisks in its value, such as MART_CRON_TIME='*/15 * * * *', is deliberately excluded from this command: inside the unquoted command substitution, the shell interprets each asterisk as a filename wildcard and expands it, which breaks the export command.

We can manually export it as follows:

export MART_CRON_TIME='*/15 * * * *'
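To see the filtering behaviour end to end, the export pipeline can be tried against a small sample file (the file contents below are illustrative, not Bahmni's actual .env):

```shell
# Create a sample .env with a comment, plain variables, and a cron-style
# value containing asterisks.
cat > /tmp/sample.env <<'EOF'
# Bahmni environment
OPENMRS_DB_NAME=openmrs
MART_CRON_TIME='*/15 * * * *'
OPENMRS_DB_USERNAME=openmrs-user
EOF

# Same filter as above: drop comments and any line containing asterisks.
export $(sed '/^#/d; /\*.*\*/d' /tmp/sample.env)

echo "OPENMRS_DB_NAME=$OPENMRS_DB_NAME"        # exported normally
echo "MART_CRON_TIME=${MART_CRON_TIME:-unset}" # filtered out, still unset
```

Any variable filtered out this way, such as MART_CRON_TIME, must then be exported manually as shown above.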

Adapting the Compose file for docker stack deploy

To deploy the stack using Docker stack deploy with Docker Compose, the following changes need to be made to the Docker Compose file:

  1. Comment out the "profiles" property: docker stack deploy ignores some Docker Compose properties on its own, but the "profiles" property is not among them. When it is present in the Docker Compose file, we found that deployment fails with the error "Additional property profiles is not allowed". To resolve this, comment out the "profiles" property in the Docker Compose file.
    One alternative to using profiles in Docker Compose is to maintain multiple Compose files for the different configurations of your services.

  2. Define networks for your services: By default, when we use docker compose, Docker creates a default network for all services defined in the docker-compose.yml file. When using docker stack deploy, however, we need to explicitly define the networks in the docker-compose.yml file.
    To define a network in the Compose file, add the networks property to the service definition, specifying the name of the network you want to attach. For example, let's create a network called bahmni for the proxy service. We can add the following lines to the proxy service in the docker-compose.yml file:

    proxy:
      networks:
        - bahmni

    Next, we define the network itself at the top level of the Compose file:

    networks:
      bahmni:
        driver: overlay

    This creates an overlay network called bahmni that spans all nodes in the swarm. The overlay network driver provides automatic service discovery and load balancing across nodes, and the services will use this network to communicate with each other.

  3. Define the deploy property in the Compose file: The deploy property in a Docker Compose file defines how services are deployed in a Swarm cluster. It contains several sub-properties for configuring different aspects of the deployment, such as the number of replicas, resource constraints, and placement constraints. In the example below, for the bahmni-web service we set replicas to 3 and add a placement constraint that ensures the service only runs on nodes with the manager role:

    bahmni-web:
      image: bahmni/bahmni-web:${BAHMNI_WEB_IMAGE_TAG:?}
      networks:
        - bahmni
      deploy:
        placement:
          constraints: [node.role == manager]
        replicas: 3
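Replica counts set under deploy are not fixed: once the stack is running, they can be changed at runtime with docker service scale. Note that docker stack deploy prefixes every service name with the stack name; the stack name bahmni below is an example.

```shell
# Scale the bahmni-web service of a stack named "bahmni" to 5 replicas.
# The prefix depends on the name passed to docker stack deploy.
docker service scale bahmni_bahmni-web=5
```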

This is what docker-compose.yml would look like after making the above changes:

version: '3.7'
services:
  proxy:
    image: 'bahmni/proxy:${PROXY_IMAGE_TAG:?}'
    #volumes:
    #  - ${CERTIFICATE_PATH}:/etc/tls
    networks:
      - bahmni
    ports:
      - '80:80'
      - '443:443'
    deploy:
      replicas: 3
  bahmni-config:
    image: 'bahmni/clinic-config:${CONFIG_IMAGE_TAG:?}'
    networks:
      - bahmni
    volumes:
      - '${CONFIG_VOLUME:?}:/usr/local/bahmni_config'
    deploy:
      replicas: 2
  bahmni-lab:
    image: 'bahmni/bahmni-lab:${BAHMNI_LAB_IMAGE_TAG:?}'
    networks:
      - bahmni
    deploy:
      replicas: 2
  openmrs:
    image: bahmni/openmrs:latest
    environment:
      OMRS_DB_NAME: ${OPENMRS_DB_NAME:?}
      OMRS_DB_HOSTNAME: ${OPENMRS_DB_HOST:?}
      OMRS_DB_USERNAME: ${OPENMRS_DB_USERNAME:?}
      OMRS_DB_PASSWORD: ${OPENMRS_DB_PASSWORD:?}
      OMRS_CREATE_TABLES: ${OPENMRS_DB_CREATE_TABLES}
      OMRS_AUTO_UPDATE_DATABASE: ${OPENMRS_DB_AUTO_UPDATE}
      OMRS_MODULE_WEB_ADMIN: ${OPENMRS_MODULE_WEB_ADMIN}
      # OMRS_DEV_DEBUG_PORT: ${OMRS_DEV_DEBUG_PORT}
      OMRS_JAVA_SERVER_OPTS: ${OMRS_JAVA_SERVER_OPTS}
      OMRS_JAVA_MEMORY_OPTS: ${OMRS_JAVA_MEMORY_OPTS}
      SEND_MAIL: ${SEND_MAIL}
      MAIL_TRANSPORT_PROTOCOL: ${MAIL_TRANSPORT_PROTOCOL}
      MAIL_SMTP_HOST: ${MAIL_SMTP_HOST}
      MAIL_SMTP_PORT: ${MAIL_SMTP_PORT}
      MAIL_SMTP_AUTH: ${MAIL_SMTP_AUTH}
      MAIL_SMTP_STARTTLS_ENABLE: ${MAIL_SMTP_STARTTLS_ENABLE}
      MAIL_SMTP_SSL_ENABLE: ${MAIL_SMTP_SSL_ENABLE}
      MAIL_DEBUG: ${MAIL_DEBUG}
      MAIL_FROM: ${MAIL_FROM}
      MAIL_USER: ${MAIL_USER}
      MAIL_PASSWORD: ${MAIL_PASSWORD}
      OMRS_DOCKER_ENV: ${OPENMRS_DOCKER_ENV}
    networks:
      - bahmni
    volumes:
      - "${CONFIG_VOLUME:?}:/etc/bahmni_config/:ro"
      - 'bahmni-patient-images:/home/bahmni/patient_images'
      - 'bahmni-document-images:/home/bahmni/document_images'
      - 'bahmni-clinical-forms:/home/bahmni/clinical_forms'
      - 'configuration_checksums:/openmrs/data/configuration_checksums'
    depends_on:
      - openmrsdb
    deploy:
      placement:
        constraints: [node.role == manager]
  openmrsdb:
    image: ${OPENMRS_DB_IMAGE_NAME:?}
    networks:
      - bahmni
    restart: always
    command: --character-set-server=utf8 --collation-server=utf8_general_ci
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:?}
      MYSQL_DATABASE: ${OPENMRS_DB_NAME:?}
      MYSQL_USER: ${OPENMRS_DB_USERNAME:?}
      MYSQL_PASSWORD: ${OPENMRS_DB_PASSWORD:?}
    volumes:
      - 'openmrsdbdata:/var/lib/mysql'
      - 'configuration_checksums:/configuration_checksums'
    deploy:
      replicas: 2
      placement:
        constraints: [node.role == manager]
  bahmni-web:
    image: bahmni/bahmni-web:${BAHMNI_WEB_IMAGE_TAG:?}
    networks:
      - bahmni
    volumes:
      - "${CONFIG_VOLUME:?}:/usr/local/apache2/htdocs/bahmni_config/:ro"
    deploy:
      replicas: 3
  implementer-interface:
    image: bahmni/implementer-interface:${IMPLEMENTER_INTERFACE_IMAGE_TAG:?}
    networks:
      - bahmni
    depends_on:
      - openmrs
  reports:
    image: bahmni/reports:${REPORTS_IMAGE_TAG:?}
    networks:
      - bahmni
    environment:
      OPENMRS_DB_HOST: ${OPENMRS_DB_HOST:?}
      OPENMRS_DB_NAME: ${OPENMRS_DB_NAME:?}
      OPENMRS_DB_USERNAME: ${OPENMRS_DB_USERNAME:?}
      OPENMRS_DB_PASSWORD: ${OPENMRS_DB_PASSWORD:?}
      OPENMRS_HOST: ${OPENMRS_HOST:?}
      OPENMRS_PORT: ${OPENMRS_PORT:?}
      REPORTS_DB_SERVER: reportsdb
      REPORTS_DB_NAME: ${REPORTS_DB_NAME:?}
      REPORTS_DB_USERNAME: ${REPORTS_DB_USERNAME:?}
      REPORTS_DB_PASSWORD: ${REPORTS_DB_PASSWORD:?}
    volumes:
      - "${CONFIG_VOLUME:?}:/etc/bahmni_config/:ro"
    depends_on:
      - reportsdb
      - openmrsdb
      - bahmni-web
  reportsdb:
    image: mysql:5.7
    networks:
      - bahmni
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD:?}
      MYSQL_DATABASE: ${REPORTS_DB_NAME:?}
      MYSQL_USER: ${REPORTS_DB_USERNAME:?}
      MYSQL_PASSWORD: ${REPORTS_DB_PASSWORD:?}
  patient-documents:
    image: 'bahmni/patient-documents:${PATIENT_DOCUMENTS_TAG:?}'
    networks:
      - bahmni
    volumes:
      - 'bahmni-document-images:/usr/share/nginx/html/document_images'
    environment:
      - OPENMRS_HOST=${OPENMRS_HOST:?}
    depends_on:
      - openmrs
  appointments:
    networks:
      - bahmni
    image: bahmni/appointments:${APPOINTMENTS_IMAGE_TAG:?}
networks:
  bahmni:
    driver: overlay
volumes:
  openmrs-data:
  openmrsdbdata:
  bahmni-patient-images:
  bahmni-document-images:
  bahmni-clinical-forms:
  bahmni-config:
  configuration_checksums:

Feel free to make changes as per your needs.

Finally, we can deploy the stack with the command:

docker stack deploy --compose-file docker-compose.yml <STACK-NAME>
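Once deployed, the state of the stack can be inspected with the standard Swarm commands (replace <STACK-NAME> with the name used above):

```shell
# List the services in the stack and their replica counts.
docker stack services <STACK-NAME>

# Show where the tasks of a given service were scheduled and their state.
docker service ps <STACK-NAME>_openmrs

# Confirm that the overlay network was created; stack deploy prefixes the
# network name with the stack name.
docker network ls --filter driver=overlay

# Tail the logs of a service across all its replicas.
docker service logs --follow <STACK-NAME>_proxy
```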


By default, Docker uses the "local" volume driver, creating volumes automatically on whichever host a task is scheduled. This approach is not suitable for our use case, where services may be scheduled on different nodes: Docker would create a fresh, empty volume on each node, resulting in an inconsistent state.

To avoid this issue, we can use shared-storage solutions such as EFS, EBS, GlusterFS, iSCSI, or SSHFS to ensure that volumes are available on all nodes in the cluster. With these solutions, services can access the same data regardless of the node they are scheduled on, leading to a more consistent and reliable system.
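As one illustration of the shared-storage approach, a named volume can be backed by an NFS export through Docker's built-in local driver. The server address and export path below are placeholders, and the NFS server must already be reachable from every node:

```yaml
# Example only: back the bahmni-patient-images volume with an NFS export so
# that every node in the swarm mounts the same data.
volumes:
  bahmni-patient-images:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=<NFS-SERVER-IP>,rw"
      device: ":/exports/patient_images"
```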



The Bahmni documentation is licensed under Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)