Bahmni Lite Infra-Cost Estimates (based on Performance Testing)

All the projected estimates are based on tests run for 24 hours against the Bahmni Lite 1.0 release, on a single Kubernetes namespace in a single cluster, with multiple concurrent users and a shared RDS instance.

Cost Estimation Sheet with Results

Comprehensive test results and calculation estimates can be viewed in this Google Sheet document: Bahmni Lite Setup and Cost Estimation

The analysis below explains the design factors that were taken into consideration during the performance tests and the resulting capacity calculations.

Smaller cluster (approx 40 concurrent users)

A t3.large instance is suggested for a Bahmni Lite implementation of this size, since the number of active pods may vary from 14 to 22 depending on the requirement. A t3.large supports up to 35 Kubernetes pods, which leaves enough headroom for seamless deployments.
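The "Max pods supported" figures quoted in this document (35 for t3.large, 58 for m5.xlarge) correspond to the pod limits the default AWS VPC CNI derives from each instance type's ENI and per-ENI IPv4 address limits. A minimal sketch of that calculation is shown below, assuming the published AWS ENI limits for these two instance types (the ENI figures are reference values, not data from the test report):

```python
# Sketch: how the EKS "max pods" limits (35 and 58) are typically derived when
# the default AWS VPC CNI is used. The ENI/IP limits below are the published
# AWS values for these instance types and are assumptions here, not test data.

ENI_LIMITS = {
    # instance type: (max ENIs, IPv4 addresses per ENI)
    "t3.large":  (3, 12),
    "m5.xlarge": (4, 15),
}

def max_pods(instance_type: str) -> int:
    """Default VPC CNI formula: ENIs * (IPs per ENI - 1) + 2."""
    enis, ips_per_eni = ENI_LIMITS[instance_type]
    return enis * (ips_per_eni - 1) + 2

for itype in ENI_LIMITS:
    print(f"{itype}: up to {max_pods(itype)} pods")
# t3.large: up to 35 pods
# m5.xlarge: up to 58 pods
```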

Hardware Used For Testing

Node (EC2: t3.large)

  • RAM 8GB

  • 2 vCPU

  • Amazon Linux (x86_64)

  • Max pods supported: 35

Database (AWS RDS service: db.t3.xlarge)

  • RAM 16 GB

  • 4 vCPU (2 cores, 2.5 GHz Intel Scalable Processor)

  • 100 GB secondary storage

  • MySQL, max_connections = 1304 (see the note after this list)
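The max_connections value is the RDS default for this DB instance class; for MySQL on RDS the default parameter is {DBInstanceClassMemory/12582912}. The sketch below illustrates that relationship; the "usable fraction" is an assumption made here to show why the observed value (1304) sits slightly below a naive calculation from the nominal 16 GiB, and is not a figure from the test.

```python
# Sketch: RDS MySQL's default max_connections is {DBInstanceClassMemory/12582912}.
# DBInstanceClassMemory is the memory actually available to the engine, which is
# a bit below the nominal instance RAM; the 0.955 usable fraction below is an
# illustrative assumption, not a measured value.

NOMINAL_RAM_BYTES = 16 * 1024**3   # db.t3.xlarge: 16 GiB nominal RAM
USABLE_FRACTION = 0.955            # assumed share left after OS/RDS overhead
DIVISOR = 12_582_912               # ~12 MiB per connection in the RDS formula

db_instance_class_memory = NOMINAL_RAM_BYTES * USABLE_FRACTION
print(int(db_instance_class_memory / DIVISOR))   # ~1303, close to the observed 1304
```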

Report

https://bahmni.github.io/performance-test/longduration_report-20230130141239257_40users_24hrs_all_omods_afterhipfix/index.html

Patient Records in DB (prior to running the test): 75000

The given hardware supported 40 concurrent users during our test with acceptable performance. The clinical activities performed during the test were:

Activities              | Total performed | Single user performed
Patient Created         | 5760            | 720
Patient Search          | 8640            | 720
Patient Consultation    | 5760            | 288
Patient Document Upload | 1440            | 720

Based on the above numbers & hardware, we recommend the following:

  1. If the expected patient/traffic load matches the test data, then on the same hardware you can run up to 20 clinics (2 users per clinic) or 10 clinics (4 users per clinic).

  2. If the expected patient/traffic load is LESS than 75% of the above test, then you can likely run 24+ clinics (2 users per clinic) or 12+ clinics (4 users per clinic); see the sketch below.
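The clinic counts above follow from dividing the concurrent-user capacity demonstrated in the test by the number of active users expected per clinic. A minimal sketch of that arithmetic is given below; the function name and the load_factor parameter are illustrative assumptions, and the published "24+"/"12+" figures keep extra headroom beyond the raw division.

```python
# Sketch: clinic-count recommendations derived from tested concurrent-user
# capacity. The helper and its load_factor parameter are illustrative, not
# part of the Bahmni performance-test tooling.

def clinics_supported(concurrent_user_capacity: int,
                      users_per_clinic: int,
                      load_factor: float = 1.0) -> int:
    """Clinics that fit if each user generates `load_factor` times the
    per-user traffic exercised in the test (1.0 = same traffic profile)."""
    effective_users_per_clinic = users_per_clinic * load_factor
    return int(concurrent_user_capacity // effective_users_per_clinic)

# Smaller cluster (t3.large node), tested capacity ~40 concurrent users:
print(clinics_supported(40, users_per_clinic=2))                    # 20 clinics
print(clinics_supported(40, users_per_clinic=4))                    # 10 clinics
# At less than 75% of the tested per-user load the same hardware stretches
# further; the recommendations above round down to 24+ / 12+ for extra margin.
print(clinics_supported(40, users_per_clinic=2, load_factor=0.75))  # 26 clinics
print(clinics_supported(40, users_per_clinic=4, load_factor=0.75))  # 13 clinics
```

The same arithmetic applied to the larger cluster described below (about 70 concurrent users) gives the 35-clinic and 17-clinic figures in that section.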

Larger cluster (approx 70 concurrent users)

An m5.xlarge instance is suggested for a Bahmni Lite implementation of this size, since it provides twice the CPU and RAM of a t3.large and supports up to 58 Kubernetes pods.

 

Hardware Used For Testing

Node (EC2: m5.xlarge)

  • RAM 16GB

  • 4 vCPU

  • Amazon Linux (x86_64)

  • Max pods supported: 58

Database (AWS RDS service: db.t3.xlarge)

  • RAM 16 GB

  • 4 vCPU (2 cores, 2.5 GHz Intel Scalable Processor)

  • 100 GB secondary storage

  • MySQL, max_connections = 1304

Report

https://bahmni.github.io/performance-test/longduration_report-20230213133118638_70users_24hours_AfterHIPfix_m5xlarge/index.html

Patient Records in DB prior to running the tests: 90000

The given hardware supported 70 concurrent users during our test with acceptable performance. The clinical activities performed during the test were:

Activities              | Total performed | Single user performed
Patient Created         | 10080           | 720
Patient Search          | 14400           | 720
Patient Consultation    | 10080           | 288
Patient Document Upload | 2160            | 720

Based on the above numbers & hardware, we recommend the following:

  1. If the expected patient/traffic load matches the test data, then on the same hardware you can run up to 35 clinics (2 users per clinic) or 17 clinics (4 users per clinic).

  2. If the expected patient/traffic load is LESS than 75% of the above test, then you can likely run 40+ clinics (2 users per clinic) or 20 clinics (4 users per clinic).

NOTE: The assumed numbers of concurrent users are based on the average number of requests a user sends to the server and on typical clinic setups observed from real-world sources.
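For context on what the per-user numbers in the tables imply, the sketch below converts the "Single user performed" counts into an average pacing per virtual user, assuming each count covers the full 24-hour run; this is derived arithmetic, not output from the test tooling.

```python
# Sketch: average pacing per virtual user implied by the "Single user performed"
# counts in the tables above, assuming the counts span the 24-hour run.

TEST_DURATION_S = 24 * 60 * 60   # 24-hour long-duration test

per_user_counts = {
    "Patient Created":         720,
    "Patient Search":          720,
    "Patient Consultation":    288,
    "Patient Document Upload": 720,
}

for activity, count in per_user_counts.items():
    interval_min = TEST_DURATION_S / count / 60
    print(f"{activity}: one every {interval_min:.0f} min per user")
# Created / Search / Document Upload: one every 2 min; Consultation: every 5 min.
```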

100+ Concurrent Users Long Duration Test Data

For detailed test results and more scenarios, please see this wiki page: https://bahmni.atlassian.net/wiki/spaces/BAH/pages/3110568005

The Bahmni documentation is licensed under Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)