BahmniLite Performance Baselining (AWS + Gatling)

This is a living document capturing baselining snapshots taken while:

  • Troubleshooting and applying a patch

  • Changing Software or Network configurations

  • Adding new scenarios and changing load share

Source Code: GitHub - Bahmni/performance-test

⭕️ Automation Technology Stack

⭕️ Current Snapshots

 

Hardware

The performance environment ran on an AWS EKS cluster with a single node.

Node (EC2: t3.large)

  • RAM 8GB

  • 2 vCPU

  • 100GB Secondary storage

  • Amazon Linux, x86_64

The cluster runs a total of 20 application pods, such as openmrs, bahmni-web, postgresql, abdm, etc.

Database (AWS RDS service: db.t3.xlarge)

  • RAM 16GB

  • 4 vCPU (2 core, 2.5 GHz Intel Scalable Processor)

  • 100GB Secondary storage

  • MySQL, max_connections = 1304

Software

OpenMRS Tomcat - Server

Server version: Apache Tomcat/7.0.94
Server built: Apr 10 2019 16:56:40 UTC
Server number: 7.0.94.0
OS Name: Linux
OS Version: 5.4.204-113.362.amzn2.x86_64
Architecture: amd64
JVM Version: 1.8.0_212-8u212-b01-1~deb9u1-b01
ThreadPool: Max 200, Min 25 (default server.xml)

OpenMRS - Heap

  • Initial Heap: 1024 MB

  • Max Heap: 1536 MB

-Xms1024m -Xmx1536m -XX:NewSize=512m -XX:MaxNewSize=512m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=1024m -XX:InitialCodeCacheSize=64m -XX:ReservedCodeCacheSize=96m -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=40 -XX:+UseParNewGC -XX:ParallelGCThreads=2 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs -XX:CMSInitiatingOccupancyFraction=85 -XX:+CMSScavengeBeforeRemark -XX:+UseGCOverheadLimit -XX:+UseStringDeduplication

OpenMRS Connection Pooling

hibernate.c3p0.max_size=50
hibernate.c3p0.min_size=0
hibernate.c3p0.timeout=100
hibernate.c3p0.max_statements=0
hibernate.c3p0.idle_test_period=3000
hibernate.c3p0.acquire_increment=1

📗 Current Results - 40 Concurrent Users

  • Network: 60 Mbps

  • Duration: 24 hours

  • Ramp Up: 5 mins

  • Database pre-state: 75000 patient records
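As a rough sketch only (not code from Bahmni/performance-test), the run parameters above map onto Gatling's closed-model injection DSL as follows; the simulation class name and the empty scenario body are hypothetical placeholders:

```scala
import io.gatling.core.Predef._
import scala.concurrent.duration._

class FrontdeskBaselineSimulation extends Simulation {

  // Placeholder scenario; the real request chains live in Bahmni/performance-test
  val frontdesk = scenario("Frontdesk").exec { session => session }

  setUp(
    frontdesk.inject(
      rampConcurrentUsers(0).to(40).during(5.minutes), // Ramp Up: 5 mins
      constantConcurrentUsers(40).during(24.hours)     // Duration: 24 hours
    )
  )
}
```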

Report Link: Gatling Stats - Global Information

Report Observations:

| Simulations | Scenario | Load share | Patient Count | Min Time (ms) | 95th Percentile (ms) | 99th Percentile (ms) | Max Time (ms) |
|---|---|---|---|---|---|---|---|
| Frontdesk (50% Traffic) | New Patient Registration Start OPD Visit | 40% | 5760 | 152 | 484 | 648 | 1389 |
| | Existing Patient Search using ID Start OPD Visit | 30% | 4320 | 54 | 472 | 676 | 1977 |
| | Existing Patient Search using Name Start OPD Visit | 20% | 4320 | 119 | 352 | 507 | 1492 |
| | Upload Patient Document | 10% | 1440 | 142 | 482 | 581 | 1135 |
| Doctor (50% Traffic) | Doctor Consultation (8 Observations, 2 Lab Orders, 3 Medication) | 100% | 5760 | 1364 | 4056 | 4531 | 7291 |
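The patient counts in the table are consistent with a fixed per-user pace. The following back-of-envelope check is an inference from the numbers, not a statement from the report; the ~120 s Frontdesk pace and ~300 s Doctor pace are assumptions:

```scala
// patientCount ≈ users × trafficShare × loadShare × (durationSec / paceSec)
def expectedCount(users: Int, trafficShare: Double, loadShare: Double, paceSec: Int): Long =
  Math.round(users * trafficShare * loadShare * (24 * 3600.0 / paceSec))

expectedCount(40, 0.5, 0.4, 120) // 5760 — New Patient Registration (40% of Frontdesk)
expectedCount(40, 0.5, 1.0, 300) // 5760 — Doctor Consultation (100% of Doctor)
```

The same formula also reproduces the 70-user counts (e.g. 70 × 0.5 × 0.4 × 720 = 10080), which suggests the pace per virtual user was held constant between the two runs.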

 

Hardware

The performance environment ran on an AWS EKS cluster with a single node.

Node (EC2: m5.xlarge)

  • RAM 16GB

  • 4 vCPU

  • 100GB Secondary storage

  • Amazon Linux, x86_64

Database (AWS RDS service: db.t3.xlarge)

  • RAM 16GB

  • 4 vCPU (2 core, 2.5 GHz Intel Scalable Processor)

  • 100GB Secondary storage

  • MySQL, max_connections = 1304

Software

OpenMRS Tomcat - Server

Server version: Apache Tomcat/7.0.94
Server built: Apr 10 2019 16:56:40 UTC
Server number: 7.0.94.0
OS Name: Linux
OS Version: 5.4.204-113.362.amzn2.x86_64
Architecture: amd64
JVM Version: 1.8.0_212-8u212-b01-1~deb9u1-b01
ThreadPool: Max 200, Min 25 (default server.xml)

OpenMRS - Heap

  • Initial Heap: 1024 MB

  • Max Heap: 2536 MB

-Xms1024m -Xmx2536m -XX:NewSize=512m -XX:MaxNewSize=512m -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=1024m -XX:InitialCodeCacheSize=64m -XX:ReservedCodeCacheSize=96m -XX:SurvivorRatio=4 -XX:TargetSurvivorRatio=40 -XX:+UseParNewGC -XX:ParallelGCThreads=2 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+CMSCompactWhenClearAllSoftRefs -XX:CMSInitiatingOccupancyFraction=85 -XX:+CMSScavengeBeforeRemark -XX:+UseGCOverheadLimit -XX:+UseStringDeduplication

OpenMRS Connection Pooling


📗 Current Results - 70 Concurrent Users

Report Link: Gatling Stats - Global Information

Report Observations:

| Simulations | Scenario | Load share | Patient Count | Min Time (ms) | 95th Percentile (ms) | 99th Percentile (ms) | Max Time (ms) |
|---|---|---|---|---|---|---|---|
| Frontdesk (50% Traffic) | New Patient Registration Start OPD Visit | 40% | 10080 | 130 | 252 | 309 | 674 |
| | Existing Patient Search using ID Start OPD Visit | 30% | 7200 | 49 | 305 | 464 | 2660 |
| | Existing Patient Search using Name Start OPD Visit | 20% | 7200 | 135 | 278 | 348 | 573 |
| | Upload Patient Document | 10% | 2160 | 111 | 206 | 253 | 459 |
| Doctor (50% Traffic) | Doctor Consultation (8 Observations, 2 Lab Orders, 3 Medication) | 100% | 10080 | 998 | 2331 | 2608 | 4134 |

 

🔰 Observations:

  • Doctor Consultation - The maximum response times for this activity were high in both the 40- and 70-concurrent-user tests. The services responsible for this activity are under analysis and will be prioritised for performance improvement.

Note: The workloads modelled here are more demanding than typical real-world clinic activity for Bahmni Lite users. It is therefore considered safe to deploy for the suggested concurrent-user counts on a single cluster, even though the maximum response times for some activities are not yet optimal.

⭕️ Base Configuration

The details below are historical data observed during early tests with different configurations at the start of the performance analysis for Bahmni Lite. They are provided for record purposes only and should not be treated as references.

 

Hardware

The performance environment ran on an AWS EKS cluster with a single node.

Node (EC2: m5.xlarge)

  • RAM 16GB

  • 4 vCPU

  • 100GB Secondary storage

  • Amazon Linux, x86_64

Database (AWS RDS service: db.t3.xlarge)

  • RAM 16GB

  • 4 vCPU (2 core, 2.5 GHz Intel Scalable Processor)

  • 100GB Secondary storage

  • MySQL, max_connections = 1304

Software

OpenMRS Tomcat - Server

OpenMRS - Heap

  • Initial Heap: 256 MB

  • Max Heap: 768 MB

-Xms256m -Xmx768m -XX:PermSize=256m -XX:MaxPermSize=512m

OpenMRS Connection Pooling

 

📙 10 Concurrent Users

Report Link: Gatling Stats - Global Information

Report Observations:

| Simulations | Scenario | Load share | Patient Count | Min Time (ms) | Max Time (ms) |
|---|---|---|---|---|---|
| Frontdesk (50% Traffic) | New Patient Registration Start OPD Visit | 40% | 174 | 100 | 1077 |
| | Existing Patient Search using ID Start OPD Visit | 30% | 106 | 243 | 1550 |
| | Existing Patient Search using Name Start OPD Visit | 20% | 107 | 243 | 1437 |
| | Upload Patient Document | 10% | 27 | 228 | 2169 |
| Doctor (50% Traffic) | Doctor Consultation (8 Observations, 2 Lab Orders, 3 Medication) | 100% | 187 | 185 | 2092 |

📙 25 Concurrent Users

Report Link: Gatling Stats - Global Information

Report Observations:

| Simulations | Scenario | Load share | Patient Count | Min Time (ms) | Max Time (ms) |
|---|---|---|---|---|---|
| Frontdesk (50% Traffic) | New Patient Registration Start OPD Visit | 40% | 86 | 161 | 1193 |
| | Existing Patient Search using ID Start OPD Visit | 30% | 74 | 320 | 1113 |
| | Existing Patient Search using Name Start OPD Visit | 20% | 64 | 310 | 1144 |
| | Upload Patient Document | 10% | 21 | 213 | 843 |
| Doctor (50% Traffic) | Doctor Consultation (8 Observations, 2 Lab Orders, 3 Medication) | 100% | 107 | 259 | 846 |

📙 40 Concurrent Users - Standard Traffic condition

📙 70 Concurrent Users - High Traffic condition

📙 90 Concurrent Users - Peak Traffic condition

⭕️ Pace Based Framework

The framework is based on iterations-per-unit-time volumes, i.e. a dedicated pace is set for each persona simulation.
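As an illustrative Gatling sketch (the pace value and the placeholder exec are assumptions, not taken from the repository), a persona-level pace looks like this:

```scala
import io.gatling.core.Predef._
import scala.concurrent.duration._

// Each virtual user starts at most one iteration per pace interval,
// independent of how fast the server responds.
val doctor = scenario("Doctor")
  .forever(
    pace(5.minutes)                // dedicated pace for the Doctor persona
      .exec { session => session } // placeholder for the consultation requests
  )
```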

Default JVM Configuration

📙 40 Concurrent Users - Standard Traffic condition

📙 70 Concurrent Users - High Traffic condition

📙 90 Concurrent Users - Peak Traffic condition

Tuned JVM Configuration

The JVM was tuned for better efficiency relative to the default configuration.

Configuration

📙 40 Concurrent Users - Standard Traffic condition

📙 70 Concurrent Users - High Traffic condition

📙 90 Concurrent Users - Peak Traffic condition

 

⭕️ Scenario based pace framework

The scenario-based pace framework allows a different pace to be configured for each scenario individually.
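A hedged sketch of what this can look like in Gatling (scenario names, shares and pace values here are illustrative only): `randomSwitch` distributes the load share, while each branch carries its own `pace`:

```scala
import io.gatling.core.Predef._
import scala.concurrent.duration._

// Load share via randomSwitch; a separate pace per scenario branch.
val frontdesk = scenario("Frontdesk").forever(
  randomSwitch(
    40.0 -> pace(2.minutes).exec { s => s }, // New Patient Registration
    30.0 -> pace(2.minutes).exec { s => s }, // Existing Patient Search by ID
    20.0 -> pace(3.minutes).exec { s => s }, // Existing Patient Search by Name
    10.0 -> pace(4.minutes).exec { s => s }  // Upload Patient Document
  )
)
```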

Configuration

📙 40 Concurrent Users - Standard Traffic condition

📙 70 Concurrent Users - High Traffic condition

📙 90 Concurrent Users - Peak Traffic condition

The Bahmni documentation is licensed under Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)