
The Monitoring and Alerting stack uses several components for metrics collection, storage, indexing and visualisation, as described below.

  • Node Exporter → Exports hardware-level metrics such as CPU, memory and disk usage.

  • cAdvisor → Exports container-level metrics such as CPU, memory and network usage.

  • Blackbox Exporter → Monitors HTTP endpoints for status codes, latency etc.

  • Prometheus → Scrapes metrics from the different agents and targets, then indexes and stores them. Also manages retention of the metrics.

  • Grafana → Helps visualise metrics from different datasources such as Prometheus, Loki etc.

Note: Bahmni Observability includes only the Docker Compose configurations for running these components, and no customisations are made. Implementers can therefore refer to the official docs of these components for additional configurations and extensions.

All the below configurations have been extensively tested on Linux platforms (specifically Ubuntu). If you are using them on other platforms such as Windows or macOS, certain metrics may not be scraped properly due to filesystem differences and restrictions. Therefore, it is advisable to refer to the official documentation and adjust configuration parameters accordingly for such hosts.

Running agents for metrics on Bahmni Application Server

The first step of setting up the monitoring stack is running the required agents on the node, which export metrics such as CPU, memory and disk usage.

If you plan to manage multiple Bahmni instances with a single monitoring stack, the agents need to be run on every Bahmni server instance.

Step 1: Clone the bahmni-observability repository

git clone https://github.com/Bahmni/bahmni-observability

Step 2: Running the agents

Navigate into the root of the repository cloned in the above step, update the value of COMPOSE_PROFILES in the .env file to monitoring-metrics, and then start the monitoring-metrics profile.

cd bahmni-observability
docker compose --profile monitoring-metrics up -d

The above commands will pull the Docker images for node-exporter and cAdvisor and start the services. Two ports are exposed by the services that export the metrics. From a browser of your choice you can hit http://<ip>:8100/metrics to get the container metrics and http://<ip>:9100/metrics to get the node metrics.
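
The same check can be done from the command line if curl is available (replace <ip> with the address of the server running the agents):

```shell
# Smoke-test the metrics endpoints; each should print Prometheus-format lines.
curl -s http://<ip>:9100/metrics | head -n 5   # node-exporter (host metrics)
curl -s http://<ip>:8100/metrics | head -n 5   # cAdvisor (container metrics)
```

A non-empty response in the Prometheus text format confirms the agents are up and scrapeable.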

Step 3: Firewall rules for the ports exposed by the agents

As mentioned above, the agents expose their metrics on ports 8100 and 9100. So depending on your installation, add firewall rules that allow TCP connections from the monitoring server. This step is not needed when you run the monitoring stack on the same machine as the Bahmni application server.

The default setup does not include any authentication for these metrics endpoints. Therefore, we suggest implementing firewall rules to restrict access only to the specific IP address of the monitoring server.
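
As an illustration, on Ubuntu with ufw enabled the restriction could look like the following (10.0.2.100 stands in for the monitoring server's IP, matching the sample configuration later on this page; adapt to your firewall tooling):

```shell
# Allow only the monitoring server (10.0.2.100) to reach the exporter ports;
# other sources remain blocked by ufw's default deny policy.
sudo ufw allow from 10.0.2.100 to any port 9100 proto tcp
sudo ufw allow from 10.0.2.100 to any port 8100 proto tcp
sudo ufw status numbered   # verify the rules were added
```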

Running the monitoring services on Monitoring Server

The next phase of the setup is running the Prometheus and Grafana services, which collect metrics from the agents and let you visualise them through dashboards.

Step 1: Clone the bahmni-observability repository

git clone https://github.com/Bahmni/bahmni-observability

Note: Skip this step if you are running the monitoring stack on the same server as Bahmni, as the repo would already have been cloned in the previous steps.

Step 2: Configuring the agent details

The IP address and port of the instance where the agents are running must be configured in the prometheus.yml file. This configuration enables Prometheus to scrape metrics from the agents. For additional configuration details, please refer to the documentation here.

To configure the host details, you need to find the IP address of the server where the agents are running. You can use the ifconfig command to find it. If you are running the monitoring stack on the same machine, use the IP of the local network assigned to the server. Note: You can also use the FQDN of your server if it is configured in your firewall tooling.
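
For example, either of the following commands (commonly available on Ubuntu) prints the addresses assigned to the host:

```shell
hostname -I        # space-separated list of IP addresses assigned to the host
ip -4 addr show    # per-interface detail, from the iproute2 suite
```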

  1. Open the bahmni-observability/config/prometheus.yml file in a text editor of your choice

  2. Add '<ip>:8100' under the targets section of the cadvisor job.

  3. Add '<ip>:9100' under the targets section of the node job
    Note: targets is a YAML list where you can specify multiple hosts, allowing you to manage and visualise metrics from different servers on the same monitoring stack.

  4. Remember to add the details of both the application server and the monitoring server to the configuration file.

  5. There is a job named blackbox which uses the Blackbox Exporter to monitor HTTP endpoints. More about the Blackbox Exporter can be read from here. Under the targets of this job, specify the applications that you have deployed at your implementation.
    Example:

    - https://10.0.1.100/openmrs
    - https://10.0.1.100/openelis
    - http://10.0.1.100:8069
  6. Save the edits on the configuration file.

  7. Refer to the snippet below for a sample configured file, which assumes the Bahmni application server running with IP 10.0.1.100 and the monitoring server running with IP 10.0.2.100

 Sample Prometheus Configuration file
global:
  scrape_interval: 15s

scrape_configs:
- job_name: prometheus
  static_configs:
  - targets:
    - prometheus:9090

- job_name: 'blackbox_exporter'
  static_configs:
  - targets: 
    - 'blackbox-exporter:9115'

- job_name: cadvisor
  static_configs:
  - targets:
    - '10.0.1.100:8100'
    - '10.0.2.100:8100' 

- job_name: 'node'
  static_configs:
  - targets:
    - '10.0.1.100:9100'
    - '10.0.2.100:9100' 

- job_name: 'blackbox'
  scrape_interval: 1m
  metrics_path: /probe
  params:
    module: [http_2xx]
  relabel_configs:
    - source_labels: [__address__]
      target_label: __param_target
    - source_labels: [__param_target]
      target_label: instance
    - target_label: __address__
      replacement: blackbox-exporter:9115
  static_configs:
  - targets:
    - 'https://10.0.1.100/openmrs'
    - 'https://10.0.1.100/openelis'
    - 'http://10.0.1.100:8069' 
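
Before starting the services, the edited file can optionally be validated with promtool. The sketch below assumes Docker is available and that the prom/prometheus image bundles promtool (recent versions do); run it from the repository root:

```shell
# Syntax-check the Prometheus configuration without starting Prometheus.
docker run --rm --entrypoint promtool \
  -v "$(pwd)/config/prometheus.yml:/prometheus.yml:ro" \
  prom/prometheus check config /prometheus.yml
```

A "SUCCESS" result means the YAML is well-formed and the scrape configs parse cleanly.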

Step 3: Starting the services

Navigate into the root of the cloned repository, update the value of COMPOSE_PROFILES in the .env file to monitoring-metrics,monitoring, and then start both profiles.

cd bahmni-observability
docker compose --profile monitoring-metrics --profile monitoring up -d

The above commands will pull the Docker images for Grafana, Prometheus and nginx (used as a proxy) and start the services.

After a few minutes you can access Grafana at http://<ip>:81 (the IP should be that of the monitoring server). The default credentials are admin/admin. Grafana will prompt for a password change on first login; we highly recommend setting a strong password for the admin user. You can then create additional users in Grafana.
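
Whether Prometheus itself is reachable on its default port 9090 depends on how the compose file exposes it; if it is, a quick hedged check that all configured targets are being scraped (assumes curl on the monitoring server):

```shell
# Each scraped target reports "up" when healthy; anything else needs attention.
curl -s 'http://<ip>:9090/api/v1/targets' | grep -o '"health":"[^"]*"'
```

The same information is available in the Prometheus UI under Status → Targets.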

Step 4: Adding SSL for Grafana access (Optional)

SSL termination for access to Grafana is achieved using a proxy service. The out-of-the-box configuration ships with nginx for this purpose, and the service is named monitoring-proxy.

Generate or obtain an SSL certificate for your domain. Copy the fullchain certificate as cert.pem and the key as key.pem to a directory on the monitoring server.

  1. Set the CERTIFICATE_PATH variable in the .env file to that directory path

  2. Uncomment the volume for CERTIFICATE_PATH in the volumes section of the monitoring-proxy service in docker-compose.yml

  3. Open the bahmni-observability/config/default.conf.template file in a text editor

  4. Uncomment the following lines in the file

        #listen 443 ssl default_server;
        #ssl_certificate /etc/ssl/certs/cert.pem;
        #ssl_certificate_key /etc/ssl/certs/key.pem;
        #add_header Strict-Transport-Security "max-age=31536000 ; includeSubDomains" always;

  5. Uncomment the 445 port configuration in the ports section of the monitoring-proxy service in docker-compose.yml

  6. Update the services by running docker compose --profile monitoring up -d monitoring-proxy

Now you can access Grafana at https://<ip>:445
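
To confirm the proxy is serving your certificate, you can inspect it from any machine with openssl installed (replace <ip> and <your-domain> with your values):

```shell
# Print the subject and validity window of the certificate served on port 445.
openssl s_client -connect <ip>:445 -servername <your-domain> </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates
```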

Importing Dashboards in Grafana

Grafana has the provision to import community dashboards into your instance. There are many Grafana dashboards available here which can be imported, and you can also create your own custom dashboards. Below are a few recommended dashboards for the three exporters that ship out of the box.

  1. Node Exporter Full → 1860

  2. cAdvisor Exporter → 14282

  3. Blackbox Exporter Minimal View → 20168

To import any dashboard, navigate to Dashboards → New → Import → Enter ID → Select Prometheus Datasource → Load. Read about the steps here.

Once imported, the dashboards let you visualise the different node and container metrics.
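
If you are provisioning many instances, imports can also be scripted against the Grafana HTTP API. The sketch below is illustrative and makes assumptions beyond this page: it uses the import endpoint the Grafana UI calls, assumes curl and jq are installed, assumes the default admin credentials, and assumes the dashboard exposes a DS_PROMETHEUS datasource input (the input name varies per dashboard).

```shell
# Download the latest revision of dashboard 1860 (Node Exporter Full)
# from grafana.com, then import it, mapping its datasource input to
# the Prometheus datasource configured in Grafana.
curl -s https://grafana.com/api/dashboards/1860/revisions/latest/download \
  -o dashboard.json

jq -n --slurpfile d dashboard.json '{
    dashboard: $d[0],
    overwrite: true,
    inputs: [{name: "DS_PROMETHEUS", type: "datasource",
              pluginId: "prometheus", value: "Prometheus"}]
  }' | curl -s -u admin:admin -H "Content-Type: application/json" \
       -X POST --data-binary @- http://<ip>:81/api/dashboards/import
```

For repeated deployments, Grafana's file-based dashboard provisioning is a more robust alternative to API calls.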

Sample Dashboard Screenshots